Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
The Pragmatic Programmer by Andy Hunt and Dave Thomas
I kept thorough notes while reading The Pragmatic Programmer. This isn't a review so much as a public sharing of those notes! To serve as a reference for present you and future me.
A Pragmatic Philosophy
Software Entropy
Entropy = level of disorder in a system. The universe works towards maximum entropy.
Broken Windows are the first sign of entropy. When one thing is out of place and not fixed, the rest of the neighborhood goes.
When adding code, do no harm.
Technical debt = rot. Same topic.
Stone Soup and Boiled Frogs
Ask for forgiveness, not permission. Be a catalyst for change.
Show success before asking for help.
Remember the Big Picture.
Maintain awareness of what's around you, a la the Navy SEALs.
Good-Enough Software
The scope and quality of your software should be a part of the discussion when planning for it. With clients, talk about tradeoffs. Don't aim for perfection every time. Know when to ship good-enough software. Again, discuss this with the client. It's not all up to you.
Example: SSR and React Portal aren't playing nice. Do the research to discuss solutions. Leave the decision to client for whether or not this should stop us from shipping the code.
Your Knowledge Portfolio
Investing in your knowledge and experience is your most valuable asset. Stagnating will mean the industry will pass you by.
Serious investors:
- Invest regularly
- Diversify for long-term success
- Balance conservative and high-risk/high-reward investments
- Aim to buy low and sell high (emerging tech)
- Portfolios should be reviewed and re-evaluated regularly
Suggested Goals:
- Learn one new language every year (this year: Python)
- Read a technical book each month
- Participate in user groups
- Experiment with different environments (atm: shell and markdown)
- Stay current (Syntax)
It doesn't matter if you use this tech on a project or not - the engagement with new ideas and ways of doing things will change how you program.
Think critically. Be mindful of whether or not something is valuable to place in the knowledge portfolio. Consider:
- The 5 why's
- Who benefits?
- What's the context?
- When or where would this work?
- Why is this a problem?
Go far: If you are in implementation, find a book on design.
A Pragmatic Approach
The Essence of Good Design
ETC — Make everything Easy To Change. We can't predict the needs of the future, so maintain flexibility in design now. That means modularity, decoupling, and single sources of truth.
DRY — The Evils of Duplication
DRY: Don't Repeat Yourself. This is more nuanced than "don't copy/paste."
Maintenance is not done after a project is completed, it is a continual part of the process. You are a gardener, continue to garden and maintain.
DRY is maintaining your code so that every piece of knowledge has a single, unambiguous, authoritative representation within the system.
Example: Regions stored in the DB.
GraphQL is a brilliant implementation of DRY - It's self documenting and APIs are automatically generated.
```python
def validate_age(val):
    validate_type(val)
    validate_min_integer(val)

def validate_quantity(val):
    validate_type(val)
    validate_min_integer(val)
```

This does not violate the DRY principle, because these are separate pieces of knowledge. They use the same code (think of copying CSS), but they don't need to share the same function. One validates age, one validates quantity. We keep it ETC by keeping these procedures separate, even if they use the same code.
Documentation is often duplication. Write readable code, and you won't have to worry about documenting.
DRY in Data can often be mitigated through calculation.
You don't need to store the averageRent, just the rent prices. You can break this rule, so long as you keep it close to the module. Make it so that when a value changes, calculations are done to update it.
A general rule for classes and modular code is to make any outside endpoint an accessor or setter function rather than exposing access to the metal. Doing this makes it easier to adjust those methods later (setting a value can trigger other internal methods, and a getter can hide whether the value is calculated or accessed directly; it shouldn't matter either way).
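As a quick illustrative sketch tying this to the averageRent idea above (a hypothetical Building class, not from the book):

```js
class Building {
  constructor(rents) {
    this._rents = rents; // the single source of truth
  }

  // Setter-style entry point: later we can trigger other
  // internal updates here without touching any callers.
  addRent(amount) {
    this._rents.push(amount);
  }

  // Callers can't tell (and shouldn't care) whether this
  // value is stored or calculated on the fly.
  get averageRent() {
    const total = this._rents.reduce((sum, rent) => sum + rent, 0);
    return total / this._rents.length;
  }
}
```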
Inter-developer Duplication
Keeping clear communication among teams will help keep from code duplication.
Orthogonality
```
^
|
|
|
__________>
```

Two lines are orthogonal if each can move in its own direction without moving along the other axis. An X/Y axis is orthogonal because movement in one direction requires no change in the other.

This is an ideal in our code. It's not necessarily achievable to perfection, but getting 80% of the way there is the goal. The authors note that, in reality, most real-world requirements will require changes to multiple functions in the system. In an orthogonal system, though, it's only one module within those functions that changes. That's the scope of it.
A helicopter is a non-orthogonal system, requiring regular balancing.
Benefits include a boost in productivity, flexibility, and simplicity.
You also reduce the risk of one change ruining another part of the code.
You know this as component-based design.
Even in design, consider orthogonality. Is your system for user IDs orthogonal if the user ID is their phone number? No!
Be mindful of third-party libraries in orthogonal systems. If another library makes you access objects in a special way, it's likely not orthogonal. At the very least, wrap the handler in something that isolates that logic.
Coding
What to do while coding:
- Keep code decoupled. More later.
- Avoid global data. You can mitigate this by passing context into modules or as parameters in React. Redux stores app-level data, but you mitigate this by only requesting what you need.
- Avoid similar functions.
Reversibility
There are no final decisions
We can't rely on the same vendors over time. To mitigate this, hide third-party APIs behind your own abstraction layers. Break your code into components, even if you deploy to a single server. This mirrors Wes Bos' advice: when working with server code, write the function itself, then write a handler that imports that code and runs it.
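Here's a minimal sketch of that idea (file and function names are hypothetical):

```js
// createPost.js: pure logic, no vendor details
export const createPost = (data) => {
  // ...build and save the post...
  return { ok: true, data };
};

// handler.js: a thin, vendor-specific wrapper (AWS Lambda shape here)
import { createPost } from './createPost';

export const handler = async (event) => {
  const result = createPost(JSON.parse(event.body));
  return { statusCode: 200, body: JSON.stringify(result) };
};
```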
Forgo Following Fads
Tracer Bullets
An approach that is not the same as prototyping. The aim of tracer bullets is to find the target while laying down the skeleton of your project.
An example: Getting a "hello, world" app up that utilizes many different systems together.
Tracer bullets don't always hit their target; get accustomed up front to the fact that they most likely won't. Using lightweight code makes it easier to adapt.
Prototyping and Post It Notes
Prototyping, by contrast, is a throwaway. It can include high-level code, or none at all. It can be post-it notes and still images, or even just drawing on a whiteboard!
You can prototype:
- Architecture
- New functionality
- Structure or contents of external data
- Third-party tools or components
- Performance issues
- User interface design
Again, many of these solutions are fine on a white board, or you can code something up that's more involved for testing.
You can forget about:
- Correctness
- Completeness (limited functions)
- Robustness (minimal error checking)
- Style (code style and documentation)
Communicate that this code is meant to be thrown away. You may be better off with tracer bullets if your management is likely to want to deploy this.
Domain Languages
Internal Language
This is using a programming language itself as the primary means of communication. React and Jest are good examples of this.
The strength here is that you have a lot of flexibility with the language. You can use the language to create several tests automatically, for example.
External Language
This is using a meta-language that requires a parser to implement. JSON, YAML, and CSV are good examples. They contain information and data, but need parsing to turn into action. The most extreme example is an application that uses its own custom language (GROQ is an example of this). If a client is using your product, reach for off-the-shelf external language solutions (JSON, YAML, CSV).
Mix of both
Using methods and functions is a good in-between. Jest uses functions (describe, test, expect) that have their own language and "syntax" but are, at the end of the day, just functions. This is the most ideal approach in most cases where programmers are using your solution.
```js
test('two plus two', () => {
  const value = 2 + 2;
  expect(value).toBeGreaterThan(3);
  expect(value).toBeGreaterThanOrEqual(3.5);
  expect(value).toBeLessThan(5);
  expect(value).toBeLessThanOrEqual(4.5);
  // toBe and toEqual are equivalent for numbers
  expect(value).toBe(4);
  expect(value).toEqual(4);
});
```

Chris' Notes!
An example of this is ACNM. You're using React to write code for yourself. You're using Sanity to generate JSON objects that are then parsed and controlled by your application.
Estimating
You can't truly estimate a specific project until you are iterating on it, if it's large enough.
Consider the time range of the project, and quote your estimate in appropriately sized units (330 days sounds precise; 6 months is appropriately vague).
Breaking down a project can help you give a ballpark answer to how long something will take. It will also help you say "If you want to do Y instead, we could cut time in half"
Keeping track of your estimates is good — it will help train your gut and intuition, so you can give better estimates as a lead.
PERT (Program Evaluation Review Technique) is a system using optimistic, most likely, and pessimistic estimates. A good way to start, allowing for a range with specific scenarios instead of just one large ballpark guess with padding.
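(The arithmetic isn't in my notes, but the standard PERT expected value is E = (O + 4M + P) / 6. So with an optimistic 3 days, a most likely 5 days, and a pessimistic 13 days: (3 + 4×5 + 13) / 6 = 36 / 6 = 6 days.)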
The only way to refine an estimate is to iterate. How long will this take? How long is a string? There are so many factors at play that are not the same - team productivity, features, unforeseen issues....
The schedule will iterate with the project. You won't get a clear answer until you are getting closer. Avoid hard dates off into the future.
Always say "I'll get back to you." Let things take how long they take.
This is for you too! Allow things to take as long as they take, don't feel rushed or pressured to produce. They take as long as they take.
The Basic Tools
At this point, the tools become conduits from the maker's brain to the finished product.
Start with a basic set of generally applicable tools. Let need drive your acquisitions.
Many new programmers make the mistake of adopting a single power tool, such as... an IDE.
The Power of Plain Text
[There's a] difference between human readable and human understandable.
- Insurance against obsolescence
- Leverage existing tools
- Easier testing
Easier Testing: If you use plain text to create synthetic data to drive system tests, then it is a simple matter to add, update, or modify the test data without having to create any special tools to do so. (Chris here – AKA, no mocking!)
Version Control
Invaluable tool. Serves as a time machine, collaborative tool, safe test space for concurrent development, and a backup of the project (and your most important files!!).
Text Manipulation
(This book was done in plain text and manipulation is done in a number of ways)
- Building the book
- Code inclusion and highlighting
- Website updates
- Including equations
- Index generator
Engineering Daybooks
We use them to take notes in meetings, to jot down what we're working on.... leave reminders where we put things, etc...
It acts as a kind of rubber duck... when you stop to write something down, your brain may switch gears, almost as if talking to someone...you may realize that what you'd just done is just plain wrong.
Pragmatic Paranoia
You can't trust the data out there or even your own application. You have to continually write safeguards into your code. Consider Python: when writing a crawler, you have to assume you'll get bad information, or that changes will occur. Assume the data you are trying to grab is very brittle.
This is true in React as well. Assume errors will happen.
Design by Contract
In the human world, contracts help add predictability to our interactions. In the computer world, this is true too.
A contract has a precondition, a postcondition... and then there are class invariants.

Precondition: Handled by the caller, ensuring that good data and conditions are being passed to the routine.

The alternative? Bugs and errors. By setting up preconditions, you allow for a safe postcondition.
Example:
```python
if availability_regex:
    unit_dict['date_available'] = standardize_date(availability_regex[0], output='str', default=True)
```

Here we're only calling standardize_date if we have an availability_regex. Another Python example:
```python
if chunk.getAttribute('name'):
    name = chunk['name']

# Condensed into
name = chunk.getAttribute('name')
if not name:
    raise AptError("No Name found")
```

The authors, in Dead Programs Tell No Lies, actually say to crash when necessary. Get this straight: some of this advice is conflicting and situational. Sometimes you'll want to avoid running code from the outside, as above. Sometimes you'll want to raise exceptions.
This is actually why people like TypeScript. There's an initial headache in getting everything set up, BUT once things are up and running, you can rest assured that your code will work solidly. Communication is clear, and it incorporates documentation in that way.
Who's responsible?
Who is responsible for checking the precondition - the caller or the routine being called?
Here's an example in React. The routine is:
```js
renderGraph = () => {
  const { data, color, options, responsiveOptions, animationStyle, showPoints } = this.props;
  let update = false;

  if (this.graphElement.current && Array.isArray(data?.series)) {
    // Render the graph
  }
};
```

and here is the caller:

```js
componentDidMount() {
  this.renderGraph();
}
```

Here the routine is responsible for validating the inputs. The issue here is that it will be called, but then there's no guarantee that it's doing what it set out to do. The contract is broken silently.
Perhaps this is just more acceptable in asynchronous code? We are accepting that "We may not have all the information we need on first call. So let's wait until the next call."
The issue is in clarity. I see it as I code. I see "Oh, it's called on mount, but it's called on updates too, so there's no telling if it's actually doing what it needs to do."
But again - we are dealing with heavily event driven programming, so the rules may not apply. For now, file this under "Good to know for Python."
Assertions: You can partially emulate these checks with an assertive language such as TypeScript. However, it won't cover all of your bases. Consider DBC more of a design philosophy than a need for tooling.
DBC and Crashing Early
Crashing early, although painful, is a good thing. When you crash early, you can get to the root of the problem quicker.
The authors answered the thought I had: it's actually not desirable in this philosophy for sqrt to return NaN, because it may only be ages later that you realize the issue was with what you provided to sqrt, several function calls back.
In conclusion: DBC is a proactive way of writing code so that you can find problems earlier. It can be implemented with tests and documentation, or treated as a personal design philosophy.

The authors even make the case that DBC is different from, and preferable to, TDD, as it's more efficient.
Possible examples
Some libraries exist to use this in JS. Here's a babel plugin with pre- and postconditions:
```js
function withdraw(fromAccount, amount) {
  pre: {
    typeof amount === 'number';
    amount > 0;
    fromAccount.balance - amount > -fromAccount.overdraftLimit;
  }

  post: {
    fromAccount.balance - amount > -fromAccount.overdraftLimit;
  }

  fromAccount.balance -= amount;
}
```

and with invariants:
```js
function withdraw(fromAccount, amount) {
  pre: {
    typeof amount === 'number';
    amount > 0;
  }

  invariant: {
    fromAccount.balance - amount > -fromAccount.overdraftLimit;
  }

  fromAccount.balance -= amount;
}
```

The current way I handle this in my own JS is manual assertions:

```js
function withdraw(fromAccount, amount) {
  if (!fromAccount || !amount) return null;
  . . .
}
```

but this is only the precondition. Not to mention that this makes the routine responsible for handling the issue.
Semantic invariants
These are a philosophical contract. A more broad principle that guides development. Example: Credit card transactions: "Err in favor of the consumer."
Dynamic contracts and agents
"I can't provide this, but if you give me this, then I might provide something else." High level stuff. Contracts negotiated by our programs. If you have xyz, I can return abc. Very interesting. Think of how GraphQL dynamically creates types. When it can dynamically look for what it needs out of given inputs, then it can solve negotiation issues.
Dead Programs Tell No Lies
Here we go!!
In some environments, it may be inappropriate simply to exit a running program. You may have claimed resources that need to be released, error logs to handle, open transactions to clean up, or other processes still to interact with.
AND YET the basic principle stays the same: terminate the function within that system when an error occurs, to prevent it from continuing in a bad state.
Example in Python:
```python
def collect_and_update(region, address, update=True):
    db = Db().db
    building = db.buildings.find_one(
        {'region': region, 'address': address},
        projection={'region': 1, 'name': 1, 'address': 1, 'state': 1, 'city': 1, 'collector': 1}
    )
    if not building:
        raise AptError('Building not found: {}, {}'.format(address, region))
    if not building.get('collector', {}).get('url'):
        raise AptError('{} does not have Collector url'.format(address))
    if not building.get('collector', {}).get('collectorType'):
        raise AptError('{} does not have Collector type'.format(address))
```

Here, the raise keyword stops the program.
Example in React:
```js
const data = useMemo(() => {
  if (averagePriceAggregate) {
    const dataRes = { series: [], labels: [] };
    // ...
  }
}, [averagePriceAggregate]);
```

No error is raised, but the code is encapsulated by an if statement to ensure it has the data it needs, and it will not run otherwise.
Who's Responsible for the precondition? Well, it actually depends on your environment.
Assertive Programming
Assert against the impossible. If you think it is impossible... It's probably possible. Validate often.
This is not to replace real error handling. If there is an issue, log and handle the error. Use assertions to pass on to the error logger. Terminate if necessary.
When asserting, do not create side effects. No (array.pop() == null) checks.
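A small sketch of side-effect-free asserting (the helper and names are my own, not from the book):

```js
// A tiny assert helper: check, then hand off to real error handling
const assert = (condition, message) => {
  if (!condition) throw new Error(`Assertion failed: ${message}`);
};

// Do the side-effecting work first...
const queue = ['first job'];
const job = queue.pop();

// ...then assert on the result, keeping the check itself side-effect free
assert(job !== undefined, 'queue was unexpectedly empty');
```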
How to Balance Resources
Finish what you start: close files. Be careful of coupling.

Act Locally: Keep scope close. Encapsulate. Smaller scope = better. Less coupling.
When Deallocating resources, do so in the opposite order of allocation.
When allocating the same set of resources in different places, always allocate in the same order
Be mindful of balancing long term. Log files are an often ignored memory hog over time.
Object-oriented languages mirror this: there's a constructor and then a destructor (you don't normally have to worry about the destructor).
In your case, event listeners - you want to add, then remove.
With exceptions, you can balance this neatly with a try...catch...finally block, or with context managers.
In Python, the with...as keyword allows you to open a file and have it closed automatically after leaving the scope.
In JS, you have try, catch, finally. Though, be sure to allocate the resource before the try...catch statement:

```js
try {
  allocateResource(); // goes wrong: the resource is never opened
} catch {
  // handle error
} finally {
  closeResource(); // oops, it never got fully opened!
}
```

Wrapper functions are helpful for managing and logging your resources. A more advanced topic, but this can be a way to go about it in other languages.
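To make the fix above concrete, here's a minimal corrected sketch: allocate first, then enter the try (helper names assumed):

```js
// Allocate first, so the resource is guaranteed to exist before the try
const resource = allocateResource();

try {
  useResource(resource);
} catch (error) {
  handleError(error);
} finally {
  closeResource(resource); // safe: it was definitely opened
}
```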
Don't Outrun Your Headlights
In small and big ways, don't outrun your headlights. Avoid "Fortune Telling." Keep the feedback loop tight. Hit save after a few lines. Pass a test when you add code. Plan work a few hours or days ahead at most.
Notice that headlights also only point in one direction. You may be thinking about the UI while you code, and then need to take a moment to see how it balances against the API or another resource.
Black Swans are unpredictable, and yet are guaranteed. No one talks about Motif or OpenLook anymore, because the browser-centric web quickly dominated the landscape.
Not to mention the Federal Reserve's current interest rate hikes.
Oh hey! You are a REAL DEAL programmer as you create REAL UIs with the web!
Bend or Break
Decoupling
Train Wrecks
Be careful about how much knowledge one part of the code is expected to have about the other part of the code. Ideally, it's only a few levels deep.
For example, this...

```js
customer
  .orders()
  .find(order_id)
  .getTotals()
  .applyDiscount();
```

should more ideally be

```js
customer
  .findOrder(order_id)
  .applyDiscount();
```

Not necessarily

```js
customer.applyDiscountToOrder(order_id);
```

because it is OK to assume some global understanding. It is assumed that orders can be adjusted directly after being accessed from the customer.
The Law (rule of thumb) of Demeter simplified: Don't chain method calls.
Again, this is not a law, but a rule of thumb, as the above example demonstrates. Not chaining helps with decoupling.
Language-level APIs are the exception. It's perfectly fine to chain:

```js
orders
  .filter(filterFunc)
  .map(mapFunc)
  .slice(0, 5);
```

because you won't expect that to change anytime soon. It's about mitigating change.
Configuration
Use external configuration for your app (.env files). It's secure and keeps your app flexible. You can have different configs for different environments and deploys.
You can store it behind an API and DB for most flexible use. DB solution is best if it will be changed by the customer.
Configuration-as-a-service: Keeping config behind an API, again, keeps it flexible. An app shouldn't need to stop and rerun if something here changes (a different API key, a different port, changed credentials). API-ify this aspect for maximum flexibility.
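As a small sketch of the simplest version, here's the .env approach using the dotenv package (variable names are placeholders):

```js
// .env (kept out of version control)
// API_KEY=abc123
// PORT=4000

require('dotenv').config(); // loads .env values into process.env

const config = {
  apiKey: process.env.API_KEY,
  port: process.env.PORT || 3000,
};

module.exports = config;
```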
While You are Coding
Refactoring
It is natural for software to change. Software is not a building. It is akin to gardening, meant to be flexible and organic and needing regular nurturing.
Martin Fowler - An early writer on Refactoring
Definition: Refactoring is intentional and is a process that does not change the external behavior. No new features while refactoring!
When to Refactor
Often and in small doses. Best done when you see a pain point.
Also, right upon getting a feature to work. How can this be made more clear?
You shouldn't need a week to refactor.
Good tests are integral to refactoring. You are alerted immediately when you make an unintentional change thanks to tests.
Before the Project
The Requirements Pit
No one knows exactly what they want
In the early days, people only automated when they knew exactly what they wanted. This is not the case today. Software needs greater flexibility.
When given a requirement, your gut instinct should be to ask more clarifying questions. If you don't have any, build and ask "is this what you mean?"
Deliver facts on the situation and let the client make the decision.
Requirements are learned in a feedback loop
Consulting - ask why 5 times, and you'll get to the root. Yes, be annoying, it's ok.
Requirements vs. policy: requirements are hard and fast (must run under 500ms). Policy, however, is often configurable. For example: color scheme, text, fonts, authorizations. These are configurable and are therefore policy.
Requirements may shift when the user gets their hands on it. They may prefer different workflows. This is why short iterations work best.
A Better Way
Use index cards to gather requirements. Use a kanban board to show progress. Share the board with clients so they can see the effect of a "wafer thin mint" and they can help decide what to move along. Get them involved in the process - it's all feedback loops.
Maintain a glossary to align communication.
Excluding Internal Traffic in Analytics
It's not as clean as UA, sadly.
With Universal Analytics, Google's own Opt-Out plugin worked nicely. Unfortunately, it doesn't seem to be configured to work well with GA4.
Julius Fedorovicius has a fantastic article on what other options are available.
Google recommends filtering by IP address, but that's really not feasible with a company larger than 5 people!
The article walks through a great workaround, exposing Google's traffic_type=internal parameter, which it sets on events when there is an IP match.

The two options from there are to set this with either cookies or JavaScript. Both are imperfect in their own way, but all of these methods together end up being a usable solution.
Update: An alternate approach is to set the internal traffic from a custom event. If tag manager is already being used, it's likely there are custom events already set up for when an admin logs in. So you can trigger on admin login to set the internal traffic.
I can't recommend Julius Fedorovicius' article and site enough for all help on all the different growing pains from UA to GA4.
Here's hoping the ol' opt-out plugin gets an update sometime!
Debouncing in React (& JS Functions as Objects)
Debouncing takes a bit of extra consideration in React. I had a few twists and turns this week working with debounced functions, so let's unpack how to handle them properly!
Debouncing Function in Vanilla JS
Lodash has a handy debounce method. Though, we could also just as simply write our own:
```js
const debounce = (fn, timeout) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => { fn(...args); }, timeout);
  };
};
```

In essence, we want to call a function only after a given cool-down period, determined by timeout.
Lodash comes with some nice methods for canceling and flushing your calls. It also handles edge cases very nicely, so I would recommend its method over writing your own.
```js
const wave = () => console.log('👋');
const waveButChill = debounce(wave, 1000);

window.addEventListener('click', waveButChill);

// CLICK 50 TIMES IN ONE SECOND
// 👋
```

With the above code, if I TURBO CLICKED 50 times in one second, only one click event would fire, after the 1-second cooldown period.
React
Let's set the stage. Say we have an input with internal state and we want to send an API call after we stop typing. Here's what we'll start with:
```js
import React, { useState, useEffect } from 'react';
import debounce from 'lodash.debounce';

const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    expensiveDataQuery(value);
  }, [value]);

  const expensiveDataQuery = () => {
    // get data
  };

  const handleChange = (e) => {
    setValue(e.currentTarget.value);
  };

  return (
    <input value={value} onChange={handleChange} />
  );
};

export default Input;
```

Instead of fetching on submit, we're set up to listen to each keystroke and send a new query each time. Even with a quick API call, that's not very efficient!
Naive Approach
The naive approach would be to create our debounce as we did above, within the component, like so:
```js
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
  }, [value]);

  const fetchButChill = debounce(expensiveDataQuery, 1000);

  . . .
}
```

What you'll notice, though, is that you'll still have a query sent for each keystroke.
The reason for this is that a new function is created on each component re-render. So our timeout method is never cleared out, but a new timeout method is created with each state update.
useCallback
You have a couple of options to mitigate this: useCallback, useRef, and useMemo. All of these are ways of keeping reference between component re-rendering.
I'm partial to useMemo, though the React docs state that useCallback(fn, deps) is essentially the same as writing useMemo(() => fn, deps), so we'll go for the slightly cleaner approach!

Let's swap out our fetchButChill with useCallback:
```js
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
  }, [value]);

  const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);

  . . .
};
```

Just like useMemo, we're passing an empty array to useCallback to let it know that this should only memoize on component mount.
Clearing after Unmount
An important edge case to consider is what happens if our debounce interval continues after the component has unmounted. To keep our app clean, we'll want a way to cancel the call!
This is why lodash is handy here. Our debounced function comes with methods attached to the function itself!
WHAAAAAAT
A fun fact about JavaScript is that functions are objects under the hood, so you can store methods on functions. That's exactly what Lodash has done, and it's why we can do this:
```js
fetchButChill(value);
fetchButChill.cancel();
```

fetchButChill.cancel() will do just that: cancel the pending debounced call before it fires.
Let's finish this up by adding this within a useEffect!
```js
const Input = () => {
  const [value, setValue] = useState('');

  useEffect(() => {
    fetchButChill(value);
    return () => fetchButChill.cancel();
  }, [value]);

  const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);

  . . .
};
```

Migrating Tag Manager to Google Analytics 4
Code Set Up
If you're using Google Tag Manager, you are already set up in the code to funnel data to GA4. Alternatively, you can walk through the GA4 Setup Assistant and get a Google Site Tag. It may look something like this:
```html
<script async src="https://www.googletagmanager.com/gtag/js?id=G-24HREK6MCT"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  ...
  gtag('config', 'G-24HREK6MCT');
</script>
```

Two things are happening: we're loading the Google tag script, and we're creating a dataLayer to access any analytics information.
The dataLayer is good to note because we actually have access to it at any time in our own code. We could push custom analytics events simply by adding an event object to the dataLayer array, such as window.dataLayer.push({ event: 'generate_lead' }).
Tag Manager
If you're already using Tag Manager, you'll want to 1) add a new config tag for GA4 and 2) update any custom events, converting them to GA4-configured events. In short:
- Set up GA4 at analytics.google.com
- Take your GA4 ID over to Tag Manager and create a new GA4 Config Tag.
- Use that config tag in your new custom events.
It's advised to keep both GA4 and UA tags running simultaneously for at least a year to allow enough time for a smooth migration. Fortunately for us, it's easy to copy custom event tags and move them to a separate folder within Tag Manager.
Custom Event Considerations
Dimensions & Metrics
GA4 has two means of measuring custom events: as Dimensions or as Metrics. The difference is essentially that a dimension is a string value, while a metric is numeric.
More is available in Google's Docs.
Variables in Custom Events
Just as you had a way of piping variables into Category, Action, Label, and Value fields in UA, you can add them to your custom events in GA4.
GA4 has a bit more flexibility by allowing you to set event parameters. You can have an array of parameters with a name-value pair. So on form submit, you could have a "budget" name and a "{{budget}}" value on an event. As we alluded to above, you can provide this by manually pushing an event through your own site's JavaScript.
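For instance, a push like this (hypothetical event and parameter names) would surface budget as an event parameter in GA4:

```js
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'form_submit',
  budget: '$5,000+', // read in Tag Manager via a dataLayer variable, e.g. {{budget}}
});
```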
Resources
Analytics Mania has a couple of very thorough articles on migrating to GA4 and testing your custom events in Tag Manager.
Sustaining Creativity
I've been thinking about this a lot. I went from making music in a clearly defined community to a much more amorphous one. When walking a more individualist road after being solely communally based for so long, what's the guiding purpose?
So the question on my mind has really been this: what's the motive behind continuing to work in a creative discipline?
Nothing here is really a prescription. It's mostly me figuring it out as I go. I write a lot of "You"s in this, but really I mean "me, Chris Padilla." If any of this is helpful to you, dear reader, by all means take what works! If you have perspectives on this, drop me a line.
So here we go! Three different categories and motives for making stuff:
Personal Creativity
I like making stuff! Just doing it lights me up. The most fun is when it's a blank canvas and I'm just following my own interest. It's just for me because I'm only taking in what sounds resonate with me, what themes come to mind, and what tools I have to make a thing.
I still share because it's fun to do so! It contributes to the pride of having made something that didn't exist before. A shared memento from the engagement with the spirit of creativity. But, any benefit other people get from it is merely a side effect of the process. It's not the purpose.
An interesting nuance that is starting to settle in as I do this more and more — there is no arrival point here. Creativity is an infinite game with no winners and losers, just by playing you are getting the reward and benefits then and there. This alone is a really juicy benefit to staying creative. But maybe it's not quite enough —
Gifts
Creativity for other people. Coming from a considerate place, a genuine interest in serving the person on the other side of it. Often this feels like a little quest or challenge, because I'm tasked to use the tools and skills I have to help, entertain, or bring beauty to the audience on the other end.
I'm pretty lucky in that I've pretty much always done creative work for others that has also led to getting paid for it. Even my current work in software engineering I consider gifts. Money is part of it, but the empathetic nature of building for a specific group of people makes it feel like a gift.
$$$
Sometimes, ya gotta do what ya gotta do. In some ways, this is what separates professionals from amateurs. Teaching the student that's a bit of extra work, learning a new technology because it's popular in the market, or drawing commissions.
(Again, on a motivation level, I don't have much in my life that falls into this category. I'm very, VERY lucky to be working in a field that is interesting, and I have a pretty direct feeling of that work being of service — that work being a gift. BUT I've been in positions before where some of my work was more for those dollars.)
Actually, Game Director Masahiro Sakurai of Nintendo fame talks about this. A professional does what's tasked in front of them, even if it's not what they'd initially find interesting or fun. Even video game dev has its chores!
This type of work is not inherently sell-out-y. You can still find the joy in the work and you can still find the purpose behind it. Shifting to a gift mindset here helps. Be wary of doing anything purely for this chunk of the venn diagram with no overlap.
A classic musician's rule of thumb for taking on a gig: "It has to have at least two of these three things: 1. Pay well 2. Have great music 3. Work with great people."
The Gist: Watch your mindset.
There's a balance between gift giving and creating just for you, I've been finding.
Things we make for our own pure expression and curiosity don't need to be weighed down by the expectation of other people loving them or of them selling wildly well. The gift is in following your own creative curiosity. And that's great!
If you're ONLY making things for yourself, and you're not finding ways to serve other people, then you'll be isolated and not fully fulfilled by what you're doing. Finding ways to give creatively is the natural balance for that.
A side note: Go for things that involve a few people, IRL. Nothing quite beats joining someone's group to make music in person, teaching someone how to do what you do, or making a physical gift for someone special!
Creating a Newsletter Form in React
Twitter is in a spot, so it's time to turn to good ol' RSS feeds and email for keeping up with your favorite artists, developers, and friends!
We built one for our game. This is another case in which building forms is more interesting than you'd expect.
Component Set Up
To get things started, I've already built an API similar to the one outlined here in my Analytics and CORS post
There are ultimately three states for this simple form: Pre-submitting, success, and failure.
Here's the state that accounts for all of that:
```js
// Newsletter.js
import React from 'react';
import styled from 'styled-components';
import { useState } from 'react';

import { signUpForNewsletter } from '../lib/util';

const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';

const Newsletter = () => {
  const [emailValue, setEmailValue] = useState('');
  const [message, setMessage] = useState(defaultMessage);
  const [emailSuccess, setEmailSuccess] = useState(false);

  . . .
};
```

We're holding the form value in our emailValue state. message is what is displayed above our input to either prompt them to fill the form or inform them they succeeded. emailSuccess is simply state that will adjust styling for our success message later.
Rendering Our Component
Here is that state in action in our render method:
```js
// Newsletter.js
return (
  <StyledNewsletter onSubmit={handleSubmit}>
    <label
      htmlFor="email"
      style={{ color: emailSuccess ? 'green' : 'inherit' }}
    >
      {message}
    </label>
    <input
      type="email"
      name="email"
      id="email"
      value={emailValue}
      onChange={(e) => setEmailValue(e.currentTarget.value)}
    />
    <button type="submit">Sign Up</button>
  </StyledNewsletter>
);
```

Setting our input type to email will give us some nice validation out of the box. I'm going against the current common practice by using inline styles here for simplicity.
Handling Submit
Let's take a look at what happens on submit:
```js
// Newsletter.js
const handleSubmit = async (e) => {
  e.preventDefault();
  if (emailValue && isValidEmail(emailValue)) {
    const newsletterRes = await signUpForNewsletter(emailValue);
    if (newsletterRes) {
      setEmailValue('');
      setEmailSuccess(true);
      setMessage(successMessage);
    } else {
      window.alert('Oops! Something went wrong!');
    }
  } else {
    window.alert('Please provide a valid email');
  }
};
```

The HTML form, even when we prevent the default submit action, actually still checks the email input against its built-in validation. A great plus! I have a very simple isValidEmail method in place just to double-check.
Once we've verified everything looks good with our inputs, on we go to sending our fetch request.
```js
// util.js
export const signUpForNewsletter = (email) => {
  const data = { email };
  if (!email) console.error('No email provided', email);
  return fetch('https://coolsite.app/api/email', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(data),
  })
    .then((response) => response.json())
    .then((data) => {
      console.log('Success:', data);
      return true;
    })
    .catch((error) => {
      console.error('Error:', error);
      return false;
    });
};
```

I'm including return statements here, handled later with if (newsletterRes) ... in our component. If it's unsuccessful, returning false leads into our very simple window.alert error message. Otherwise, we continue on to updating the state to render a success message!
Wrap Up
That covers all three states: inputting, error, and success. This, in my mind, is the bare bones of getting an email form set up! Yet there's already a lot of interesting wiring that goes into it.
From a design standpoint, a lot of next steps can be taken to build on top of this. From here, you can take a look at the API and handle an automated confirmation message, you can include an unsubscribe flow, and you can include a "name" field to personalize the email.
Even on the front end, a much more robust styling for the form can be put in place.
Maybe more follow up in the future. But for now, a nice sketch to get things started!
Here's the full component in action:
```js
// Newsletter.js
import React from 'react';
import styled from 'styled-components';
import { useState } from 'react';

import { signUpForNewsletter } from '../lib/util';

const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';

const Newsletter = () => {
  const [emailValue, setEmailValue] = useState('');
  const [message, setMessage] = useState(defaultMessage);
  const [emailSuccess, setEmailSuccess] = useState(false);

  function isValidEmail(email) {
    return /\S+@\S+\.\S+/.test(email);
  }

  const handleSubmit = async (e) => {
    e.preventDefault();
    if (emailValue && isValidEmail(emailValue)) {
      const newsletterRes = await signUpForNewsletter(emailValue);
      if (newsletterRes) {
        setEmailValue('');
        setEmailSuccess(true);
        setMessage(successMessage);
      } else {
        window.alert('Oops! Something went wrong!');
      }
    } else {
      window.alert('Please provide a valid email');
    }
  };

  return (
    <StyledNewsletter onSubmit={handleSubmit}>
      <label
        htmlFor="email"
        style={{ color: emailSuccess ? 'green' : 'inherit' }}
      >
        {message}
      </label>
      <input
        type="email"
        name="email"
        id="email"
        value={emailValue}
        onChange={(e) => setEmailValue(e.currentTarget.value)}
      />
      <button type="submit">Sign Up</button>
    </StyledNewsletter>
  );
};

export default Newsletter;

const StyledNewsletter = styled.form`
  display: flex;
  flex-direction: column;
  max-width: 400px;
  font-family: inherit;
  font-size: inherit;
  padding: 1rem;
  text-align: center;
  align-items: center;
  margin: 0 auto;

  label {
    margin: 1rem 0;
  }

  #email {
    width: 80%;
    padding: 0.5rem;
    /* border: 1px solid #75ddc6;
    outline: 3px solid #75ddc6; */
    font-family: inherit;
    font-size: inherit;
  }

  button[type='submit'] {
    position: relative;
    border-radius: 15px;
    height: 60px;
    display: flex;
    -webkit-box-align: center;
    align-items: center;
    -webkit-box-pack: center;
    justify-content: center;
    padding: 2rem;
    font-weight: bold;
    font-size: 1.3em;
    margin-top: 1rem;
    background-color: var(--cream);
    color: var(--brown-black);
    border: 3px solid var(--brown-black);
    transition: transform 0.2s ease;
    text-transform: uppercase;
  }

  button:hover {
    color: #34b3a5;
    background-color: var(--cream);
    border: 3px solid #34b3a5;
    cursor: pointer;
  }
`;
```

Building a Proxy with AWS Lambda Functions and CORS
For those times you just need a sip of backend, Lambda functions serve as a great proxy.
For my situation, I needed a way for a client to submit a form to an endpoint, use a proxy to access an API key through environment variables, and then submit to the appropriate API. The proxy is still holding onto sensitive data, so in lieu of storing an API key on the client (no good!), I'm using CORS to keep the endpoint secure.
Handling Pre-Flight Requests:
This article by Serverless is a nice starting place. Here are the key moments for setting up CORS:
```yaml
# serverless.yml
service: products-service

provider:
  name: aws
  runtime: nodejs6.10

functions:
  getProduct:
    handler: handler.getProduct
    events:
      - http:
          path: product/{id}
          method: get
          cors: true # <-- CORS!
  createProduct:
    handler: handler.createProduct
    events:
      - http:
          path: product
          method: post
          cors: true # <-- CORS!
```

The key config, cors: true, is a good start, but it is the equivalent of setting our header to 'Access-Control-Allow-Origin': '*'. Essentially, this opens our endpoint up to any origin. So we'll need to find a way to secure this to only a couple of URLs.
Serverless here recommends handling multiple origins in the request itself:
```js
// handler.js
const ALLOWED_ORIGINS = [
  'https://myfirstorigin.com',
  'https://mysecondorigin.com'
];

module.exports.getProduct = (event, context, callback) => {
  const origin = event.headers.origin;
  let headers;

  if (ALLOWED_ORIGINS.includes(origin)) {
    headers = {
      'Access-Control-Allow-Origin': origin,
      'Access-Control-Allow-Credentials': true,
    };
  } else {
    headers = {
      'Access-Control-Allow-Origin': '*',
    };
  }

  . . .
}
```

This alone would work fine for simple GET and POST requests; however, more complex requests will send a preflight OPTIONS request. I am sending a POST request, but it would have to be an HTML form submission to qualify as "simple." Since I'm sending JSON, it's considered complex, and a preflight request is sent.
A little more looking in serverless docs shows us how we can approve multiple origins for our preflight requests:
```yaml
# serverless.yml
cors:
  origins:
    - http://www.example.com
    - http://example2.com
```

Server Response with Multiple Origins
When allowing multiple origins, the response needs to return a single origin in the header, matching the request origin. If we send a comma-delimited string with all our origins, the response will not be accepted.
In our server code above, we handled this with the logic below:
```js
const origin = event.headers.origin;
let headers;

if (ALLOWED_ORIGINS.includes(origin)) {
  headers = {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Credentials': true,
  };
}
```
}We grab the origin from our request headers, match it with our approved list, and then send it back in the response headers.
Lambda & Lambda Proxy
To have access to our request headers, we need to ensure we are using the correct integration.
Lambda Proxy integration is the default with serverless and the one that will include the headers.
So why am I pointing this out?
Some Lambdas you work with may include integration: lambda in their config file:
```yaml
functions:
  create:
    handler: posts.create
    events:
      - http:
          path: posts/create
          method: post
          integration: lambda
```

These are set to launch the function as Lambda integrations.
The general idea is that Lambda Proxy integrations are easier to set up, while Lambda integrations offer a bit more control. The only extra bit of work required for Lambda Proxy is handling your own status codes in the response message, as we did above. Lambda integrations may be more suitable in situations where you need to modify a request before it's sent to the Lambda, or a response after. (A really nice overview of the difference is available in this article.)
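For reference, "handling your own status codes" just means the function returns the full HTTP response shape itself. A minimal sketch:

```js
// With Lambda Proxy integration, the function owns the whole HTTP response
module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': 'https://myfirstorigin.com' },
    body: JSON.stringify({ ok: true }),
  };
};
```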
So, if you're setting up your own lambda, no need to do anything different to access the headers. If working with an already established set of APIs, keep an eye out for integration: lambda. Accessing headers will take some extra considerations in that case.
Walt Stanchfield & Performing with No Audience
Switching from a performing art to a creating medium has been weird.
As a musician and teacher, the feedback loop was pretty tight. Performing on stage and playing in groups, there's a real magic to having other people in the room responding and reacting in real time and real space.
Even with teaching! Going into a lesson, students would improve noticeably on the spot, or laugh at my bad dad jokes right then and there.
Now I work in software. Don't get me wrong, I get great feedback! Though, it's a difference between publishing and performing.
Creatively instead of playing on stage, I write songs, draw on the couch, and largely play for a digital audience. Much of my creative work is published, not performed.
So I've been thinking about that a lot.
Walt Stanchfield
The late Walt Stanchfield, former Disney animator and teacher, knows what I'm talking about. The guy, on top of being a highly expressive teacher and artist, played concert piano, wrote poems, and was an enthusiastic tennis player.
Here he is talking about animation, though it's easy to see how he could be talking about any digital creative work:
Animation has a unique requirement in that its rewards are vaguely rewarding and at the same time frustrating. We are performers but our audience is hidden from us. We are actors but there is no applause. We are artists but our works are not framed and hung on walls for friends to see. We are sensitive people whose sensibility is judged across the world in dingy theaters by a sometimes popcorn eating audience. Yet we are called upon day by day to delve deep into our psyche and come up with fresh creative bits of entertaining fare. That requires a special kind of discipline and devotion, and enthusiasm. Our inner dialogue must be amply peppered with encouraging argument. We sometimes have to invent or create an audience in our minds to draw for.
Walt knows the curious position because he's been on both sides of this. Here he is talking about performing for a live audience:
I used to sing in operettas, concerts, etc., so I know what real applause is. It is heavenly. A living audience draws something extra out of the performer. A stage director once said to the cast of a play on the opening night, “You’ve had good equipment to work with: a theatre with everything it takes to put on a show. But you have been handicapped—one essential thing has been denied you. Tonight there’s an audience out there; now you have everything you need.”
So is there a solution to dealing with that missing piece? Is it just comparing apples and oranges? Walt recommends drumming up the empathy and imagination yourself, ultimately.
Well, we do have an awaiting audience out there. We’ll be denied the applause but at least there is a potential audience to perform for; one to keep in mind constantly as we day by day shape up our year dress rehearsal. Even as we struggle with the myriad difficulties of finalizing a picture—what is the phrase, “getting it in the can,” we can perform each act for that invisible or mystical audience. We can’t see our audience but it is real and it is something to work for.
So yes, a little bit of imagination.
He mentions it earlier, but devotion and enthusiasm have been the real key for me. I don't think I'd say I necessarily played music for the applause. The practice itself is what's energizing. I'm grateful that all of my disciplines have pretty great feedback loops. They're so physical, tactile, and expressive that the work is reward enough.
Sharing is really just a nice bonus, an artifact of the time well spent chasing a creative thread.
The whole essay is "A Bit of Introspection" from Gesture Drawing For Animation by Walt Stanchfield, handout made freely available, and published into a couple of nice books as well.
Iwata on What's Worth Doing
When it comes to answering the question "What's worth doing?", the internet can muddy it up a bit.
Plenty of good to the internet: Shared information, connecting with far flung people, and finding community.
And, it's also a utility that can deceive us into feeling infinite.
I was surprised to see Nintendo's former president Satoru Iwata wrestle with this in an interview he gave for Hobo Nikkan Itoi Shinbun that was published in the book "Ask Iwata."
"The internet also has a way of broadening your motivations. In the past, it was possible to live without knowing there were people out there who we might be able to help, but today, we're able to see more situations where we might be of service. But this doesn't mean we've shed the limitations on the time at our disposal.
...as a result, it's become more difficult than ever to determine how to spend the hours of the day without regret."
Wholly relatable. Very warm to see Iwata put this in terms of serving people. For creative folk, this could be anything from projects to pursue, to audiences to reach, to relationships to develop. There's, I'm sure, an interesting intersection with another change in history: the ability to reproduce art.
"It's more about deciding where to direct your limited supply of time and energy. On a deeper level, I think this is about doing what you were born to do."
Less is the answer, and considering your unique position takes the place of overwhelming choice. "What you were born to do" can be a heavy question unto itself, but thinking of it as what you're in a unique position to do helps.
I'll paraphrase Miyazaki here: "I focus on only what's a few meters from me. Even more important than my films, that entertain countless children across the world, is making at least three children I see in a given day smile." Focusing on the physical space and your real, irl relationships, is likely to guide you towards what's worth doing.
The Gist on Authentication
Leaving notes here from a bit of a research session on the nuts-and-bolts of authentication.
There are cases where packages or frameworks handle this sort of thing. And just like anything in tech, knowing what's going on under the hood can help when you need to consider custom solutions.
Sessions
The classic way of handling authentication. This approach is popular with server-rendered sites and apps.
Here, a user logs in with a username and password, the server cross-references them in the DB, and handles the response. On success, a session is created, and a cookie is sent back with a session ID.
The "state" of sessions are stored in a cache or on the DB.
Session Cookies are the typical vehicles for this approach. They're stored on the client and automatically sent with any request to the appropriate server.
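A minimal sketch of the flow with Express and the express-session package (the handler and DB check are placeholders):

```js
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
}));

app.post('/login', (req, res) => {
  const user = lookupUser(req.body); // hypothetical DB cross-reference
  if (!user) return res.sendStatus(401);
  req.session.userId = user.id; // state lives server-side; the cookie only carries a session ID
  res.sendStatus(200);
});
```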
Pros
For this approach, it's nice that it's a passive process and very easy to implement on the client. When who's-logged-in state is stored in a cache, you have more control if you need to remotely log a user out. Though, you have less control over the cookie that's stored on the client.
Cons
The lookup to your DB or cache takes time, so you take a performance hit on every request.
Cookies are also more susceptible to Cross-Site Request Forgery (XSRF).
JWT's
Two points of distinction here: when talking about a session, we mean the one stored on the server, not session storage in the client.

Cookies could hypothetically be used to store a token, but cookies have a small size limit, so JWTs typically need another storage method.

Well, what are JWTs? JSON Web Tokens are a popular alternative to session- and cookie-based authentication.
On successful login, a JWT is returned with the response. It's then up to the client to store it for future requests, working in the same way as sessions here.
The major difference, though, is that the token is verified on the server by an algorithm, not by a DB lookup of a particular ID. That's a major pro of JWTs: it's a stateless way of handling authentication.
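A quick sketch with the jsonwebtoken package (the secret and claims are placeholders):

```js
const jwt = require('jsonwebtoken');

// On login: sign a token; no session row needed anywhere
const token = jwt.sign(
  { sub: user.id, role: 'admin' }, // `user` comes from your login check
  process.env.JWT_SECRET,
  { expiresIn: '1h' }
);

// On each request: verify by signature alone, no DB lookup
const payload = jwt.verify(token, process.env.JWT_SECRET);
```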
Options for storing this on the client include local storage, indexedDB, and some would say, depending on the size of your token, cookies.
Pros
As mentioned, it's stateless. No need to maintain sessions in your cache or on your DB.
More user-related information can be stored with the token. Details on authorization level are common ("admin" vs. "user" permissions).

This approach is also flexible across platforms. You can use JWTs with mobile applications or, say, a smart TV application.
Cons
Because this approach is stateless, you unfortunately have limited control when it comes to logging out individual users remotely. Revoking tokens outright would mean changing your signing secret, logging all of your users out.
Depending on how you store the token, there are security concerns here, too. It's best to avoid local storage in particular: anything there is readable by scripts on the page, so you are open to XSS (cross-site scripting), where malicious code could be run on your site, especially if you accept custom inputs from users.
Who Wins?
Depending on your situation, you may just need the ease of setup provided by sessions. For an API spanning multiple devices, JWTs may seem appealing. There is also the option to blend the approaches: using JWTs while also storing session logic in a cache or DB.
Some handy libraries for implementing authentication include Passport.js and Auth0. For integrated authentication with Google, Facebook, etc., there's also OAuth 2.0. A tangled conversation on its own! And, admittedly, one that's best implemented alongside a custom authentication feature, rather than as the only form of authentication.
An Overview of Developing Slack Shortcuts
For simple actions, sometimes you don't need a full-on web form to accomplish something; an integration can do the trick. Slack makes it pretty easy to turn what could be a simple web form into an easy-to-use shortcut.
It's a bit of a dance to accomplish this, so this will be more of an overview than an in depth look at the code.
As an example, let's walk through how I'd create a Suggestion Box Shortcut.
Slack API
The first stop in setting any application up with Slack is api.slack.com. Among the setup steps there, you'll create a callback ID that we'll save for later. Ours might be "suggestionbox".
Developing your API with Bolt
It's up to you how you do this! All Slack needs is an endpoint to send a POST request to. A dedicated server or serverless function works great here.
Here are the dance steps. There are multiple because we'll receive multiple communications:
Shortcut opens => Our API fires up and sends the modal "view" for the shortcut.
User marks something on the form => Our API listens to the action and potentially updates the view.
User submits the form => Our API handles the request and logs a success / fail message.
Bolt is used here to massively simplify this process. Without Bolt, the raw Slack API uses HTTP headers to manage the different interactions. With Bolt, it's all wrapped up neatly in an intuitive API.
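A rough sketch of those dance steps with Bolt (the tokens come from your Slack app settings, and the modal payload here is a bare-bones stand-in for a real form):

const { App } = require('@slack/bolt');

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// a minimal Block Kit modal; your real inputs go in blocks
const suggestionBoxModal = {
  type: 'modal',
  callback_id: 'suggestionbox_modal',
  title: { type: 'plain_text', text: 'Suggestion Box' },
  submit: { type: 'plain_text', text: 'Submit' },
  blocks: [],
};

// shortcut opens => our API fires up and sends the modal view
app.shortcut('suggestionbox', async ({ shortcut, ack, client }) => {
  await ack();
  await client.views.open({
    trigger_id: shortcut.trigger_id,
    view: suggestionBoxModal,
  });
});

// user submits the form => handle the request, log success / fail
app.view('suggestionbox_modal', async ({ ack, view }) => {
  await ack();
  // save the suggestion here
});

(async () => {
  await app.start(process.env.PORT || 3000);
})();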
Blocks
The UI components for Slack are called blocks. There's a handy UI in their documentation for building forms and receiving the appropriate JSON. Several great inputs are included, like multi-select, drop down, and date picker, all analogous to their web counterparts.
Redux Growing Pains and React Query
AC: New Murder's announcement has been par for the course for a major release. Lots of good feedback and excitement, and some big bugs that can only be exposed out in the open.
The biggest one was a bit of a doozy. It's around how we're fetching data: the app uses Redux thunks to fetch our data from the Sanity API and store it in the Redux store.
Naturally, something went wrong in between.
Querying Sanity
Sanity uses a GraphQL-esque querying language, GROQ, for data fetching. A request looks something like this:
*[_type == 'animalImage']{
  name,
  "images": images[]{
    emotion->{emotion},
    "spriteUrl": sprite.asset->url
  }
}

Similar to GraphQL, you can query specifically what you need in one request. For our purposes, we wanted to store data in different hierarchies, so a mega-long query wasn't ideal. Instead, we have several small queries by document type, like the animalImage query above.
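For illustration, one of those small query methods might wrap the Sanity client like this (the project details are placeholders, and the setup assumes the @sanity/client package):

import sanityClient from '@sanity/client';

const client = sanityClient({
  projectId: 'your-project-id', // placeholder
  dataset: 'production',
  apiVersion: '2022-01-01',
  useCdn: true,
});

// one small query per document type
export function getAnimalImages() {
  return client.fetch(`*[_type == 'animalImage']{
    name,
    "images": images[]{
      emotion->{emotion},
      "spriteUrl": sprite.asset->url
    }
  }`);
}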
The Issue
On app load, roughly 5 requests are sent to Sanity. If it's a certain page with dialogue, 5 additional requests will be sent.
The problem: Not every request returned correctly.
This started happening with our beta testers. Unfortunately, there's not a ton of data to go off of. From what we could tell, everyone had stable internet connections, used modern browsers, and weren't using any blocking plugins.
My theory is that some requests weren't fulfilled due to the high volume of simultaneous requests. I doubt Sanity couldn't handle our piddly 10 requests. More likely, there's a request limit, though I'd be surprised if it were as low as 10 within a certain timeframe.
Whatever the cause, we had an issue where API requests were failing, and we did not have a great way of handling it.
Contemplating Handling Errors
This project started 2 years ago, when using Redux for all data storage was still the prevailing trend. Things were starting to shift away as the project developed, but our architecture was already set.
There is potentially a Redux solution. Take a look at this Reducer:
function inventoryReducer(state = initialState, action) {
  const { type, payload } = action;
  switch (type) {
    case 'GET_INVENTORY_ITEMS/fulfilled':
      return { ...state, items: payload };
    ...

The "/fulfilled" portion implies that we log actions at different states. We could handle the case where a request comes back as a failure, or even write code for when a "/pending" request hasn't returned after a certain amount of time. Maybe even, say, fetch three times, then error out.
But, after doing all that, I would have essentially written React Query.
Incorporating React Query
It was time. A major refactor needed to take place.
So, at the start, the app is using Redux to fetch and store API data.
React Query can do both. But, rewiring the entire app would have been time consuming.
So, at the risk of some redundancy, I've refactored the application to fetch data with React Query and then also store the data in Redux. I get to keep all the Redux boilerplate and piping, and we get a sturdier data fetching process. Huzzah!
Gluing React Query and Redux Together with Hooks
To make all of this happen, we need Redux actions for storing the data, query methods wrapping our GROQ requests, a way of handling errors and missing data, and an easy way to call multiple queries at once. A tall order! We have to do this for 10 separate requests, after all.
After creating my actions and migrating the GROQ requests into query methods, we need to make the glue.
I used a couple of hooks to make this happen.
import React, { useEffect } from 'react';
import { useQuery } from 'react-query';
import { useDispatch } from 'react-redux';
import { toast } from 'react-toastify';

export default function useQueryWithSaveToRedux(name, query, reduxAction) {
  const dispatch = useDispatch();

  const handleSanityFetchEffect = (data, error, loading, reduxAction) => {
    if (error) {
      // surface the failure with its context attached
      throw new Error('Woops! Did not receive data from inventory', {
        cause: { data, error, loading, reduxAction },
      });
    }

    if (!loading && !data) {
      // handle missing data
      toast(
        "🚨 Hey! Something didn't load right. You might want to refresh the page!"
      );
    }

    if (data) {
      dispatch(reduxAction(data));
    }
  };

  const { data, isLoading, error } = useQuery(name, query);

  useEffect(() => {
    handleSanityFetchEffect(data, error, isLoading, reduxAction);
  }, [data, isLoading, error]);

  return { data, isLoading, error };
}

useQueryWithSaveToRedux takes in a query name, the query, and a Redux action. We write out our useQuery hook, and as the data, isLoading, and error results update, we pass them to our handler to save the data. If something goes awry, we have a couple of ways of notifying the user.
These are then called within another hook - useFetchAppLevelData.
export default function useFetchAppLevelData() {
  const snotesQuery = useQueryWithSaveToRedux('sNotes', getSNotes, saveSNotes);
  const picturesQuery = useQueryWithSaveToRedux(
    'pictures',
    getPictures,
    savePictures
  );
  const spritesQuery = useQueryWithSaveToRedux(
    'sprites',
    getSprites,
    saveSprites
  );
  ...
  return {
    snotesQuery,
    picturesQuery,
    spritesQuery,
    ...
  };
}

useFetchAppLevelData simply brings all these hooks together so that I only need to call one hook in my component. It's mostly here to keep things tidy!
import useFetchAppLevelData from './hooks/useFetchAppLevelData';

function App() {
  const location = useLocation();
  const dispatch = useDispatch();
  const fetchAppLevelDataRes = useFetchAppLevelData();
  ...
}

A big task, but a full refactor complete!
Writing Music
I had a surprisingly hard time starting up the practice of writing music. Lots of false starts were involved, a ton of back and forth on whether I even really enjoyed doing it, and the classic moments of cringing at some of my first tunes.
In a lot of ways, music school _really_ helped me out with the skills and vocabulary needed to make songs.
But then, the unspoken emphasis on theory-driven music and "correctness" in music was a really difficult funk to shake loose.
So, this is advice for me-from-a-year-ago. Or, maybe it's for you! These are some things I've picked up wrestling in the mud. It's from the perspective of a performing musician switching gears to writing. Maybe it will help if that's you!
Playful Mindset
The meatiest part of getting into it is right here. It's gotta be fun!
Gradually, over the course of going through school and mastering an instrument, I came to assume that what was meaningful had to be hard. I was fortunate to have wildly supportive instructors. Never did my music school experience come close to the movie Whiplash, is what I'm saying!
But still, it's systemically a competitive environment.
On the other side of school, creative practices have to be done with much more levity.
It helps that what I write is pretty silly! Take time to do things badly: Write the worst song ever on purpose. Accidentally write avant garde music. Write music to a silly prompt. Anything to get it moving!
Honestly, it's a lifestyle thing. Making time for play, doing things just for the fun of it, feeds into this as well.
There's a balance between finishing songs and always moving to what's most exciting. A balance between keeping a routine and letting enthusiasm guide you. That interplay is what keeps it exciting! Lean towards curiosity and interest as often as you can!
Being a Connector
Sometimes the ideas just come. Seemingly out of nowhere, after assimilating new techniques, sounds, and theory, it all just clicks!
These days are a rush when they happen! And they are few and far between.
In the meantime, I think taking the approach of a connector is really helpful.
Say you want to write a song as if Beethoven wrote Lo-Fi hip hop chill beats to study to.
You have two sounds to work with: Orchestral brilliance and a gentle beat.
Like a DJ, your job is to mix them so that they work together. DJs only have tempo and key to adjust. You, on the other hand, have a lot more tools at your disposal (swapping chords, rhythm, tempo, a new melody, instrumental texture, mood, etc.).
This is one of my favorite parts of the practice because it's SO JUICY! You get to break open and learn a little bit about what makes a certain artist, song, or style sound the way it does. There's some transcribing involved that's helpful here. Oftentimes, the pieces that need connecting need some glue. Maybe even original material! So you are in fact writing something new, even if it's just a transition or a different bass line. At the end of all that learning, you have something new that's never existed before! Something complete that gave you lots of cool little tools for future-you writing future-music.
Use References
Expanding on the above point a bit: you shouldn't have any guilt around using references.
Steal like an Artist! You could read a whole little book on it. I'll tell you now: Everyone is stealing something. Even if you're Jacob Collier, you're borrowing from genres, artists, and experimental theory ideas. We're all just riffing on the major scale, at the end of the day!!
Letting go of the weight of trying to be original helped me loosen up. Probably you're doing something original on accident, even if you're not trying. We all have such a unique collection of microscopic influences that have bent our ears and minds, it's bound to come through in what you make.
Transcribe
The best thing my general music classes gave me was just enough theory and ear training to transcribe. I also got a lot of weird hang ups about it, so I avoided it for a little while.
Some myth busting on using the tool of transcription:
Momentum is More Important Than Accuracy
Sometimes recordings are muddy, chords are dense, or a sound just isn't sitting in the ear. Move on! Find something that kind of matches the musical/emotional intent, and get back to writing. It would be a shame to let go of learning all the other juicy things about form, harmony, melody, and instrumentation just because it's hard to hear exactly what extensions were being used in a passage.
Know Enough Music Theory for the Major Tropes
In jazz, you have to know about the ii V I. In classical, the dominant to tonic. Knowing enough of the recurring themes in a genre makes transcribing easier, and you get to focus on the building blocks around them instead of dissecting a technique you probably could have found in a blog article somewhere.
Actually, blogs are great places to start with learning these, if it's a ubiquitous form like jazz.
Transcribing is a Learnable Skill
It's like anything. The more of it you do, the easier it is. Being reasonable with it at the start helps keep you moving. For example, maybe just start with the form of a song and then try to write something with the same form. Or focus on major harmonic points instead of every subtle chord shift. There's no test at the end of a transcription. So long as you're picking up a new technique and immersing yourself in a sound, you're learning what you need to from it.
Releasing Music and Overcoming Inertia
I have an arbitrary pacing for when I release music. It's broad enough that if I miss a day, it's no big deal, but frequent enough that it keeps my spirit magnetically pulled to always asking "What's next?"
I've tried a few out: "Write something everyday" was impossible. "Record one album this year" meant it was never going to happen. But having a regular interval somewhere in between those two kept me going.
Making it public also helps a lot with accountability, even if no one is actively policing your schedule.
Follow Your Energy Through the Day
Classic productivity prescription. It clicked for me when I heard Dilbert's Scott Adams talk about it in his sort-of-autobiography. For him, writing happens in the morning, and rote-drawing happens in the evening.
Translating to writing: Actual melody/harmony production happens in the morning, edits and tightening up the quality happens in the evening. Or, most of the time in my case, I took the evenings to practice an instrument like guitar or piano. It doesn't take design-type thinking to practice a scale or play an exercise.
Keep a Collection of What You Like
Likes on Spotify, bookmarks in your web browser, whatever! I personally keep a plain text file called FavoriteMusic.md where I copy in links, song titles, and notes on what I like about a song.
I have a list for album ideas. Some may never happen. But on the days where there's simply a blank canvas, both of these lists come in handy.
Make It Real
This might just be helpful to me, personally. If it's not under my fingers, it doesn't always feel very real. At the very least, it becomes too cerebral if it isn't.
Sometimes I find an idea while noodling on guitar. Or from playing sax. My favorite now is piano. Nothing beats it when it comes to visualizing harmony and getting used to thinking polyphonically.
Largely, keeping a part of the process tactile has helped. The day I got an electronic keyboard hooked up to my laptop as a midi input, the game changed.
Be In Motion
Any creative thing — music, art, blogs — is cool because, in my mind, it's a still image capture of something in motion. Like those photos with Long Exposure effects and Light Painting.
In other words - Don't worry about sitting down and not knowing what's going to come out. That's the fun part!! A dash of mystery and a pinch of romance on a day-to-day basis!
You learn from starting. Get something on the page. Then mold it. I think very few folks know exactly how something will go before they sit down to write it. It's a process. In fact, the process is what's so rewarding anyhow! It's a journey of discovery, making something. That's the point of it all in the end. Not to have made, but to be making.
Fonts and CLS
Fonts are a tricky space when accounting for CLS. On their own, they may not ding the CLS score too harshly. But if multiple elements' sizing depends on a web font loading, it can add up: a nav bar plus a hero header plus breadcrumbs plus a subtitle plus an author name can all contribute to a larger score.
Current solutions are primarily hack-y. There are a few worth experimenting with, and a few coming down the pike.
Pure CSS Solution
The leading idea offered up is to use font-display: optional. If the web font doesn't load within a very short window, the browser simply keeps the fallback font and never swaps, so there's no shift. A great SEO solution, but not an ideal design solution.
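In a styled-components app, that might look like this sketch (the font name and file path are placeholders):

import { createGlobalStyle } from 'styled-components';

// render <FontStyles /> once near the root of the app;
// if the web font isn't ready within the browser's short block period,
// the fallback stays and no swap (or shift) ever happens
const FontStyles = createGlobalStyle`
  @font-face {
    font-family: 'MyWebFont';
    src: url('/fonts/my-web-font.woff2') format('woff2');
    font-display: optional;
  }
`;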
CSS Font API
The CSS Font Loading API can be used to determine when the font has loaded, and only then render the content to the page. We can assign a callback to document.fonts.onloadingdone that switches styles from hidden to display: block. In React, it could look something like this:
import React, { useState, useEffect } from 'react';
import styled from 'styled-components';

const TextComponent = () => {
  const [fontLoaded, setFontLoaded] = useState(false);

  useEffect(() => {
    // onloadingdone is an event handler property: assign it, don't call it
    document.fonts.onloadingdone = () => setFontLoaded(true);
  }, []);

  // fallback: render content after a certain time has elapsed
  useEffect(() => {
    const timer = setTimeout(() => setFontLoaded(true), 1000);
    return () => clearTimeout(timer);
  }, []);

  ...

  return (
    <StyledTextComponent $fontLoaded={fontLoaded}>
      ...
    </StyledTextComponent>
  );
};

const StyledTextComponent = styled.ul`
  display: ${(props) => (props.$fontLoaded ? 'block' : 'none')};
  ...
`;

This is not an ideal solution for main page content: it wouldn't be great to have the content missing when SEO bots crawl your site. It would work great for asides, however.
Font Optimization
This article shares some interesting ideas on optimizing fonts so that they load before CLS is accounted for. For some use cases, though, these are heavy-handed solutions.
Font Descriptors
In the future 🪐 we'll see new font descriptors coming to CSS. Barry Pollard's Smashing Magazine article has a great overview. The gist is that we'll have more control over adjusting the size of the fallback font to match the designed font, mitigating the shift that comes from a differently sized swap.
It's almost there, but will still take some time to fully bake.
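As a rough sketch of the shape this takes, using the size-adjust and ascent-override descriptors on a fallback face (the override values here are made up and would need tuning per font):

import { createGlobalStyle } from 'styled-components';

// 'Fallback' wraps a locally available font, resized to match the web font
const FallbackFontStyles = createGlobalStyle`
  @font-face {
    font-family: 'Fallback';
    src: local('Arial');
    size-adjust: 105%;
    ascent-override: 95%;
  }

  body {
    font-family: 'MyWebFont', 'Fallback', sans-serif;
  }
`;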
Aggregation in MongoDB
Earlier I wrote on getting a quick bare-bones analytics feature running for a project.
Now that we're recording data, I want to take a look at actually analyzing what we save.
Knowing just enough about database aggregation goes a long way in providing insight into the data we're collecting! I'll dive into what this looks like on the MongoDB side:
Data Model
My use case is pretty simple. All I need to know is how many users have played a game since its release.
So, our data model is similarly simple. Here's what a log for starting the game looks like:
{
  "_id": {
    "$oid": "633eceff9b5e4de"
  },
  "date": {
    "$date": {
      "$numberLong": "1665060607623"
    }
  },
  "type": "play"
}

type here is the type of event that we're logging. "play" marks the start of the game, "complete" when they finish, and a few in between.
Aggregation
When fetching the data, I want the database to do the heavy lifting: sorting the documents and counting how many players have started the game, finished it, and hit all the points in between. MongoDB's aggregation language makes this a really easy task:
const aggregation = [
  {
    // Find documents after a certain date
    $match: {
      date: {
        $gte: new Date('Fri, 30 Sep 2022 01:15:01 GMT'),
      },
    },
  },
  // Count and group by type
  {
    $group: {
      _id: '$type',
      count: {
        $sum: 1,
      },
    },
  },
];

Here's what that returns (with fake data):
[
  { "_id": "play", "count": 100000000 }, // Wishful thinking!
  { "_id": "start act 3", "count": 136455 },
  { "_id": "complete", "count": 8535 },
  { "_id": "start trial", "count": 1364363 }
]

The $group operator is pretty flexible. With a little more elbow grease, you could also aggregate counts from month to month and display a very slick line chart. 📈
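Running that pipeline from Node might look like this sketch, reusing the aggregation array from above (the connection string, database, and collection names are assumptions):

const { MongoClient } = require('mongodb');

async function getEventCounts() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  try {
    const logs = client.db('analytics').collection('logs');
    // the database does the matching, grouping, and counting
    return await logs.aggregate(aggregation).toArray();
  } finally {
    await client.close();
  }
}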
Coming back to the point from my last article: since we're measuring a game, the interaction is what matters most to us. This data is more reliable and closely integrated with our application, since it relies on actions that bots and crawlers more than likely won't engage with. It's still probably not a perfect representation, but it provides enough data to gauge impact and see where the bottlenecks are in the game flow.

