Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more
Generating Back Links For Optimal Digital Gardening
I came across Maggie Appleton's tremendous post "A Brief History & Ethos of the Digital Garden"!
I've heard of the publishing philosophy in passing and found the term itself to resonate. A counter to high-production, corporate-leaning purposes for owning a domain name, a digital garden assumes work in progress, a broad spectrum of topics and interests, and an ever-evolving space online where ideas and things of beauty can blossom. Lovely!
There are a few patterns that show up with folks that have taken on the spirit of digital gardening. One that caught my eye was "Topography over Timelines."
Gardens are organized around contextual relationships and associative links; the concepts and themes within each note determine how it's connected to others.
This runs counter to the time-based structure of traditional blogs: posts presented in reverse chronological order based on publication date.
Gardens don't consider publication dates the most important detail of a piece of writing. Dates might be included on posts, but they aren't the structural basis of how you navigate around the garden. Posts are connected to other posts through related themes, topics, and shared context.
One of the best ways to do this is through Bi-Directional Links – links that make both the destination page and the source page visible to the reader. This makes it easy to move between related content.
Because garden notes are densely linked, a garden explorer can enter at any location and follow any trail they like through the content, rather than being dumped into a "most recent" feed.
Love it! My favorite discoveries are sites that link well. It's a blast hopping around, continuing the conversation from page to page. Wikis are the prime example of this. Though, some bloggers like Austin Kleon also do this particularly well.
So! Why only be bound by linking in one chronological direction? I took to the idea and whipped up a script to employ this myself!
Developing Bi-Directional Linking
This site uses markdown for posts. So doing this job is largely about text parsing. Much of the logic, in fact, is similar to how I parse my posts to display an art grid.
I'll start by writing the function to actually get the URL value from my links. The regex is looking for the value within the parentheses in the typical markdown shorthand for links: [link text](/url)
// api.js
export const getInternalLinksFromMarkdown = (md) => {
  // Capture group 1 is the link text; capture group 2 is the
  // internal path after the leading "/"
  const regex = /(?:__|[*#])|\[(.*?)\]\(\/(.*?)\)/g;
  return Array.from(md.matchAll(regex)).map((res) => {
    if (res && res.length > 1) {
      return res[2];
    }
  });
};
Index 2 of each match array holds the capture group I've targeted: in matchAll results, index 0 is the full match and capture groups start at index 1.
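As a quick sanity check, here's the function run over a made-up bit of markdown (the sample string is hypothetical):
// Sample input for illustration
const sample = 'A [link to a post](/digitalgardens) and a *styled* aside.';
getInternalLinksFromMarkdown(sample);
// => ['digitalgardens', undefined, undefined]
// The undefined entries come from the styling-character branch of the
// regex; they're filtered out downstream in getAllPostRefs.
External links and plain text are skipped, since the regex only matches paths that start with a leading slash.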
From here, I'll pass in my posts and systematically build an object that maps each targeted URL to the posts that link to it.
// api.js
export function getAllPostRefs(
fields = ['content', 'slug', 'title'],
options = {}
) {
const slugs = getPostSlugs();
let posts = slugs
.map((slug) => getPostBySlug(slug, fields))
// Filter false values (.DS_STORE)
.filter((post) => post)
// sort posts by date in descending order
.sort((post1, post2) => (post1.date > post2.date ? -1 : 1));
  const links = {};
  posts.forEach((post) => {
    const postLinks = getInternalLinksFromMarkdown(post.content);
    postLinks.forEach((src) => {
      // Keep internal, top-level slugs only
      if (src && !src.includes('/')) {
        if (!links[src]) {
          links[src] = [];
        }
        // Avoid duplicate entries for the same referencing post
        if (!links[src].find((entry) => entry.slug === post.slug)) {
          links[src].push({
            slug: post.slug,
            title: post.title,
          });
        }
      }
    });
  });
  return links;
}
A Set data structure would be ideal for keeping duplicates out of the list, but we'll be converting this to JSON, and JSON.stringify can't serialize a Set directly, so I'd rather avoid the extra conversion step.
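For what it's worth, here's the quirk in question:
// JSON.stringify silently drops a Set's contents:
JSON.stringify(new Set(['a', 'b'])); // => '{}'
// You'd have to spread into an array first:
JSON.stringify([...new Set(['a', 'b'])]); // => '["a","b"]'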
Finally, I'll call this function and save the results to a JSON file for reference.
// biDirectionalLink.js
import { getAllPostRefs } from './api';
import FileSystem from 'fs';
export const getRefs = () => {
  const links = getAllPostRefs();
  // Cache the backlink map so the build can read it later
  FileSystem.writeFile('_cache/backlinks.json', JSON.stringify(links), (error) => {
    if (error) throw error;
  });
};
Here's a snippet of what it generates:
{
"30": [
{
"slug": "2022",
"title": "2022"
},
{
"slug": "iwataonpeople",
"title": "Iwata on Creative People"
},
{
"slug": "transcience",
"title": "Transience"
},
{
"slug": "web2000",
"title": "A Love Letter to 2000s Websites"
}
],
"2022": [
{
"slug": "testingandwriting",
"title": "Testing Software for the Same Reason You Write Notes"
}
],
...
}
Voilà! Now I have the data of pages that are referenced. I can now call this method anytime the site regenerates and use this as the source of truth for back-linking.
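One way to wire that up, sketched here with a hypothetical script path and npm hook (not necessarily how this site does it):
// scripts/generateBacklinks.js (hypothetical wiring)
import { getRefs } from '../biDirectionalLink';
// Rebuild the backlink cache before the site builds
getRefs();
A "prebuild": "node scripts/generateBacklinks.js" entry in package.json would then run this ahead of every build.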
To consume this in Next.js, I'm going to read the file in getStaticProps (or in an RSC if I were using the App Router):
// [slug].js
import { promises as fs } from 'fs';

export async function getStaticProps({ params }) {
  const file = await fs.readFile(process.cwd() + '/_cache/backlinks.json', 'utf8');
  const backlinks = JSON.parse(file);
  let pagesLinkingBackTo = null;
  if (backlinks[params.slug]) {
    pagesLinkingBackTo = backlinks[params.slug];
  }
  // ...then pass pagesLinkingBackTo down through the page's props
And, following some prop drilling, I can now programmatically display these on matching pages:
// backLinkSection.js
import React from 'react';
import NextLink from './NextLink';

const BacklinksSection = ({ pagesLinkingBackTo }) => {
  if (!pagesLinkingBackTo) return null;
  return (
    <aside>
      <h4>Pages referencing this post:</h4>
      <ul>
        {pagesLinkingBackTo.map((link) => (
          <li key={link.slug}>
            <NextLink href={`/${link.slug}`}>{link.title}</NextLink>
          </li>
        ))}
      </ul>
    </aside>
  );
};

export default BacklinksSection;
Assuming I haven't linked to this page yet, you can see this in action at the bottom of my Parsing Markdown in Node post. Now with handy links to click and explore related topics.
I'm excited to keep tending the garden! I've already seen themes emerge through the regular tags I use. Here's to a great harvest someday!
Beethoven – Sonatina No 1 Exposition
Short and sweet this week! A little phrase from a very young Beethoven.
Pratchett on English Gardens
I revisited a passage from Sir Terry Pratchett's "A Slip of the Keyboard." The essay "Magic Kingdoms" illustrates the English dedication to maintaining a garden in any environment. Pratchett uses this to show how the garden is a portal to another world, and how a widespread fascination with planting gardens is part of why fantasy is woven into the fabric of the culture.
I remember a back garden I used to see from the train. It was a very small garden for a very small house, and it was sandwiched between the thundering railway line, a billboard, and a near-derelict factory.
I don't know what a Frenchman or an Italian would have made of it. A terrace, probably, with a few potted plants and some trellis to conceal the worst of postindustrial squalor. But this was an Englishman's garden, so he'd set out to grow, if not Jerusalem, then at least Jerusalem artichokes. There was a rockery, made of carefully placed concrete lumps (the concrete lump rockery is a great British contribution to horticulture, and I hope one is preserved in some outdoor museum somewhere). There was a pond; the fish probably had to get out to turn around. There were roses. There was a tiny greenhouse made of old window frames nailed together (another great British invention). Never was an area so thoroughly gardened, in fact, as that patch of cat-infested soil.
No attempt had been made to screen off the dark satanic mills, unless the runner beans counted. To the gardener, in the garden, they did not exist. They were in another world.
For me there's another comfort in the scene. Even if we're not nurturing plants, we all have the means to cultivate our own creative gardens. A sketchbook, journal, blog, a song. And it doesn't matter how humble! A jar of soil and basil goes a long way for bringing life to a space. So it is with strumming strings and moving the pencil.
Configuring a CI/CD Pipeline in CircleCI to Deploy a Docker Image to AWS ECS
Continuous Integration/Continuous Deployment has many benefits for a team's development process! Much of the manual work of pushing changes to production is automated, different processes can be created for staging and production environments, and setting up a CI/CD flow is a way of practicing Infrastructure as Code. The benefits of having the deployment process documented are the same as those of using git for your application code: it's clearly documented, changes can be reverted, and there's a single source of truth for the process.
Here I'll be continuing on from deploying a Docker image to AWS! This time, we're folding the process into a CI/CD pipeline. Namely: CircleCI!
Setup
To configure CircleCI, we'll add this file as .circleci/config.yml at the root of our application:
version: 2.1
orbs:
aws-cli: circleci/aws-cli@4.0
aws-ecr: circleci/aws-ecr@9.1.0
aws-ecs: circleci/aws-ecs@4.0.0
Here I'm loading in all the necessary orbs that will support our deployment. Orbs can be thought of as groups of pre-defined jobs that support integrations. Let's continue setting things up and see these orbs in action:
Checkout Repo
Much of the heavy lifting here will be done by the orbs we're pulling in. The only custom job we'll need is one for checking out our repo from GitHub:
jobs:
checkout-repo:
docker:
- image: cimg/node:20.14
steps:
- checkout
Stepping Through the Workflow
Below the jobs block, it's now time for us to set up our workflow! This is the access point where CircleCI will call our commands and run our jobs.
I'm going to start by naming the workflow build-app. Under jobs, I'll start with the checkout-repo job we just created:
workflows:
build-app:
jobs:
- checkout-repo:
name: checkout-repo
filters:
branches:
only:
- main
Here, I'm also targeting which branch triggers a build. Anytime a PR is merged into main, the process will fire off.
Next, let's build our Docker image. We're going to be configuring the aws-ecr/build_and_push_image job:
- aws-ecr/build_and_push_image:
requires:
- checkout-repo
account_id: ${ID}
auth:
- aws-cli/setup:
role_arn: ${ROLE}
role_session_name: CircleCISession
dockerfile: Dockerfile
repo: ${REPO}
tag: ${CIRCLE_SHA1}
extra_build_args: >-
--build-arg API_KEY=${API_KEY}
Most of these will be self-explanatory if you've deployed to ECR before. One thing worth noting specific to CircleCI is the requires block. Here, we're adding checkout-repo as a dependency. I want the jobs to run sequentially, so I'm telling CircleCI to wait for the previous step to complete before starting this one.
Also note that I'm passing CIRCLE_SHA1 to the tag. I'm tagging images with the SHA-1 hash of the commit being built, so all of my images are uniquely identified in ECR. The CIRCLE_SHA1 variable comes for free in any workflow.
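As a concrete illustration, say the repo were named my-app and lived in us-east-1 (both hypothetical stand-ins for ${REPO} and the region): each build would push an image addressable as
${ID}.dkr.ecr.us-east-1.amazonaws.com/my-app:<commit SHA>
This also makes rollbacks straightforward, since every commit keeps its own tag in ECR.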
Finally, we'll deploy to our ECS service by updating the service:
- aws-ecs/deploy_service_update:
requires:
- aws-ecr/build_and_push_image
cluster: ${CLUSTER}
family: ${FAMILY}
service_name: ${SERVICE}
container_image_name_updates: container=${CONTAINER}, tag=${CIRCLE_SHA1}
force_new_deployment: true
auth:
- aws-cli/setup:
role_arn: ${ROLE}
role_session_name: CircleCISession
Again, much should be familiar from the CLI approach. What's worth highlighting is the container_image_name_updates property. Since I'm setting the commit hash as the tag name in the previous job, I update my container image through the arguments container=${CONTAINER}, tag=${CIRCLE_SHA1}.
The force_new_deployment argument is required for new changes to be pushed if the task is already running on ECS (which it likely is, since this is continuous deployment!).
Full Config
That's it! That's enough to get the app spun up and running. Here's the full config for context:
version: 2.1
orbs:
aws-cli: circleci/aws-cli@4.0
aws-ecr: circleci/aws-ecr@9.1.0
aws-ecs: circleci/aws-ecs@4.0.0
jobs:
checkout-repo:
docker:
- image: cimg/node:20.14
steps:
- checkout
workflows:
build-app:
jobs:
- checkout-repo:
name: checkout-repo
filters:
branches:
only:
- main
- aws-ecr/build_and_push_image:
requires:
- checkout-repo
account_id: ${ID}
auth:
- aws-cli/setup:
role_arn: ${ROLE}
role_session_name: CircleCISession
dockerfile: Dockerfile
repo: ${REPO}
tag: ${CIRCLE_SHA1}
extra_build_args: >-
--build-arg API_KEY=${API_KEY}
- aws-ecs/deploy_service_update:
requires:
- aws-ecr/build_and_push_image
cluster: ${CLUSTER}
family: ${FAMILY}
service_name: ${SERVICE}
container_image_name_updates: container=${CONTAINER}, tag=${CIRCLE_SHA1}
force_new_deployment: true
auth:
- aws-cli/setup:
role_arn: ${ROLE}
role_session_name: CircleCISession