Chris Padilla/Blog


My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.


    New Album — Ice 🧊

    Brr!

    Music somewhere between ambient classical and winter lofi!

    Purchase on 🤘 Bandcamp and Listen on 🙉 Spotify or any of your favorite streaming services!


    Credentials Authentication in Next.js

    Taking an opinionated approach, Next Auth intentionally limits the functionality available for using credentials such as email/password for logging in. The main limitation is that credentials sign-in forces a JWT strategy instead of a database session strategy.

    Understandably so! Data leaks exposing passwords keep making headlines and have been a major security issue across platforms and services.

    However, the limitation takes some navigating when you are migrating from a separate backend with an existing DB and need to support older users that created accounts with the email/password method.

    Here's how I've been navigating it:

    Setup Credentials Provider

    Following the official docs will get you most of the way there. Here's the setup for my authOptions in app/api/auth/[...nextauth]/route.js:

    import CredentialsProvider from "next-auth/providers/credentials";
    ...
    providers: [
      CredentialsProvider({
        name: 'Credentials',
        credentials: {
            username: {label: 'Username', type: 'text', placeholder: 'your-email'},
            password: {label: 'Password', type: 'password', placeholder: 'your-password'},
        },
        async authorize(credentials, req) {
            ...
        },
      }),
    ],

    Write Authorization Flow

    From here, we need to set up our authorization logic. We'll:

    1. Look up the user in the DB
    2. Verify the password
    3. Handle a match
    4. Handle a mismatch

    async authorize(credentials, req) {
        try {
            // Add logic here to look up the user from the credentials supplied
            const foundUser = await db.collection('users').findOne({'unique.path': credentials.email});
    
            if(!foundUser) {
                // If you return null then an error will be displayed advising the user to check their details.
                return null;
                // You can also reject this callback with an Error; the user will then be sent to the error page with the error message as a query parameter
            }
    
            if(!foundUser.unique.path) {
                console.error('No password stored on user account.');
                return null;
            }
    
            const match = checkPassword(foundUser, credentials.password);
    
            if(match) {
                // Important to exclude password from return result
                delete foundUser.services;
    
                return foundUser;
            }
        } catch (e) {
            console.error(e);
        }
        return null;
    
    },

    PII

    The comments explain most of what's going on. I'll explicitly note that I'm using a try/catch block to handle everything. When an error occurs, the default behavior is for the error to be sent to the client and displayed. Even an incorrect-password error could leak Personally Identifiable Information (PII). By catching the error, we can log it with our own service and simply return null, surfacing a more generic "Failed login" message.

    Custom DB Lookup

    I'll leave explicit details out from here on how a password is verified for my use case. But here's a general way you might approach this when migrating:

    1. Verify with the previous framework/library the encryption method
    2. If possible, transfer over the code/libraries used
    3. Wrap it in a checkPassword() function.

    Sending Passwords over the Wire?

    A concern that came up for me: We hash passwords to the database, but is there an encryption step needed for sending it over the wire?

    Short answer: No. HTTPS covers it for the most part.

    Additionally, Next Auth already takes many security steps out of the box. On their site, they list the following:

    • Designed to be secure by default and encourage best practices for safeguarding user data
    • Uses Cross-Site Request Forgery Tokens on POST routes (sign in, sign out)
    • Default cookie policy aims for the most restrictive policy appropriate for each cookie
    • When JSON Web Tokens are enabled, they are encrypted by default (JWE) with A256GCM
    • Auto-generates symmetric signing and encryption keys for developer convenience

    CSRF is the main concern here, and they have us covered!

    Integrating With Other Providers

    Next Auth also allows for OAuth sign-in as well as tokens emailed to clients. However, it's not a straight shot: credentials sign-in requires a JWT strategy, while emailed tokens require a database strategy.

    There's some glue that needs adding from here. A post for another day!


    Fantasia!

    🦛

    I got sick the other week and watched both Fantasias with Miranda. Unbelievably beautiful!


    Automating Image Uploads to Cloudinary with Python

    There's nothing quite like the joy of automating something that you do over and over again.

    This week I wrote a Python script to make my life easier with image uploads for this blog. The old routine:

    • Optimize my images locally (something Cloudinary already automates, but I do by hand for...fun?!)
    • Open up the Cloudinary portal
    • Navigate to the right directory
    • Upload the image
    • Copy the url
    • Paste the image into my markdown file
    • Optionally add optimization tag if needed

    I can eliminate most of those steps with a handy script. Here's what I whipped up, with some boilerplate provided by the Cloudinary SDK quick start guide:

    from dotenv import load_dotenv
    load_dotenv()
    
    import cloudinary
    import cloudinary.uploader
    import cloudinary.api
    import pyperclip
    
    config = cloudinary.config(secure=True)
    
    print("****1. Set up and configure the SDK:****\nCredentials: ", config.cloud_name, config.api_key, "\n")
    
    print("Image to upload:")
    input1 = input()
    input1 = input1.replace("'", "").strip()
    
    print("Where is this going? (Art default)")
    
    options = [
        "/chrisdpadilla/blog/art",
        "/chrisdpadilla/blog/images",
        "/chrisdpadilla/albums",
    ]
    
    folder = options[0]
    for i, option in enumerate(options):
        print(f'{i+1} {option}')
    
    selected_number_input = input()
    
    
    if not selected_number_input:
        selected_number_input = 1
    
    selected_number = int(selected_number_input) - 1
    if 0 <= selected_number < len(options):
        folder = options[selected_number]
    
    
    res = cloudinary.uploader.upload(input1, unique_filename=False, overwrite=True, folder=folder)
    if res.get('url', ''):
        pyperclip.copy(res['url'])
        
        print('Uploaded! Url copied to clipboard:')
        print(res['url'])

    Now, when I run this script in the command line, I can drag an image in, the script will ask where to save the file, and then automatically copy the url to my clipboard. Magic! ✨

    A couple of steps broken down:

    Folders

    I keep different folders for organization. Album art is in one. Blog Images in another. Art in yet another. So first, I select which one I'm looking for:

    print("Where is this going? (Art default)")
    
    options = [
        "/chrisdpadilla/blog/art",
        "/chrisdpadilla/blog/images",
        "/chrisdpadilla/albums",
    ]
    
    folder = options[0]
    for i, option in enumerate(options):
        print(f'{i+1} {option}')
    
    selected_number_input = input()

    and later on, that's passed to the Cloudinary API as a folder:

    if not selected_number_input:
        selected_number_input = 1
    
    selected_number = int(selected_number_input) - 1
    if 0 <= selected_number < len(options):
        folder = options[selected_number]
    
    
    res = cloudinary.uploader.upload(input1, unique_filename=False, overwrite=True, folder=folder)

    Copying to clipboard

    Definitely the handiest step, and it's just a quick install away. I'm using pyperclip to make it happen with this one-liner:

    if res.get('url', ''):
        pyperclip.copy(res['url'])

    Clementi - Sonatina in F Maj Exposition

    Listen on Youtube

    Note to self: don't wait until a couple of weeks after practicing something to record 😅


    Blue Hair, Don't Care

    Just off to learn martial arts from a turtle guy


    Next Auth Custom Session Data

    I've been tinkering with Next Auth lately, getting familiar with the new App Router and React Server Components. Both have made for a big paradigm shift, and a really exciting one at that!

    With all the brand new tech, and with many people hard at work on Next Auth to integrate with all of the new hotness, there's still a bit of transition going on. I found I had to do a bit more digging to really set up Next Auth in my project, so here are some of the holes that ended up getting filled:

    Getting User Data from DB through JWT Strategy

    When you use a database adapter, Next Auth automates saving and updating user data. When migrating an existing app and DB to Next Auth, you'll likely want to handle the DB interactions yourself to fit your current implementation.

    Here's what the authOptions looked like for an OAuth provider:

    export const authOptions = {
        // adapter: MongoDBAdapter(db),
        providers: [
            GithubProvider({
                clientId: process.env.GITHUB_ID,
                clientSecret: process.env.GITHUB_SECRET,
            }),
        ],
        session: {
            strategy: 'jwt',
            maxAge: 30 * 24 * 60 * 60,
        },
        secret: process.env.NEXTAUTH_SECRET,
    };

    Notice that I'm leaving the adapter out above and using the JWT strategy instead.

    There's a bit of extra work to be done here. The session will save the OAuth data and send it along with the token. But, more than likely, you'll have your own information about the user that you'd like to send, such as roles within your own application.

    To do that, we need to add a callbacks object to the authOptions with jwt and session methods:

    callbacks: {
        async jwt({token, user}) {
            if (user) {
                token.user = user;
                const {roles} = await db.users.findOne(query);
                token.roles = roles;
            }
            return token;
        },
    
        async session({session, token}) {
            if (token.roles) {
                session.roles = token.roles;
            }
            return session;
        },
    },

    So there's a bit of hot-potato going on. On initial sign in, we'll get the OAuth user data, and then reference our db to find the appropriate user. From there, we pass that to the token, which is then extracted into the session later on.

    Once that's set, you'll want to pass these authOptions in every time you call getServerSession so that these callbacks run and the custom fields make it into the session. Here's an example in a server action:

    import React from 'react';
    import {getServerSession} from 'next-auth';
    import { authOptions } from '@api/[...nextauth]/route';
    import Button from './Button';
    
    export default async function ServerActionPage() {
        const printName = async () => {
            'use server';
    
            const session = await getServerSession(authOptions);
            console.log(session);
            return session?.user?.name || 'Not Logged In';
        };
    
        return (
            <div className="m-20">
                <Button action={printName} />
            </div>
        );
    }

    When that's logged, we'll get the OAuth user info and the roles we passed in from our db:

    {
        user: {...},
        roles: [...]
    }

    Just Friends

    Listen on Youtube

    Lovers no more~


    Trombone Gesture

    Toot!


    Brunner - Rondoletto

    Listen on Youtube

    Biiiiig finger twister! 🔀


    WHO has the smoothest moves?

    Hootin' and scootin'


    Structs in Go

    There are two ways of creating datatypes similar to JavaScript Objects and Python Dictionaries in Go: Structs and Maps.

    Structs are a collection of data that are related. Values are stored next to each other in memory. Structs are also a value type.

    Maps are Go's hash map data type. They're key-value pairs where the keys and values are each statically typed: all keys must share one type, and all values another. The main benefit is that, as a hash map, indexing and lookup are much faster.

    Let's break all that down:

    Struct values are stored next to each other in memory, on the assumption that the values are lightweight. In this way, a struct is similar to an array where the keys are strings. The values aren't indexed the way a hash map indexes its keys, though. The tradeoff is that the struct is lighter on memory, but slower to search through.

    Structs are a value type, so if we pass one into a function, the entire struct is copied. Maps, on the other hand, are a reference type: the map's address in memory is passed into the function, and any changes to the map inside the function show up as side effects on the same map.
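    To make the copy-versus-reference distinction concrete, here's a minimal sketch; the car type and the tuneUp function names are just illustrative:

```go
package main

import "fmt"

type car struct {
	make     string
	maxSpeed int
}

// Structs are value types: c is a copy, so the caller's struct is untouched.
func tuneUp(c car) {
	c.maxSpeed = 200
}

// Maps are reference types: the function mutates the same underlying map.
func tuneUpMap(m map[string]int) {
	m["maxSpeed"] = 200
}

func main() {
	c := car{make: "Toyota", maxSpeed: 120}
	tuneUp(c)
	fmt.Println(c.maxSpeed) // still 120

	// All keys share one type (string), and all values another (int).
	m := map[string]int{"maxSpeed": 120}
	tuneUpMap(m)
	fmt.Println(m["maxSpeed"]) // now 200
}
```

    The struct comes back unchanged, while the map is mutated in place.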

    Structs

    Declaring structs requires a type to be created first:

    type car struct {
        make     string
        model    string
        maxSpeed int
    }
    
    c := car{make: "Toyota", model: "Prius", maxSpeed: 120}

    Methods

    Go isn't an object-oriented language, but, as in JavaScript, similar principles can be applied. One example is defining methods on structs:

    type car struct {
        make     string
        model    string
        maxSpeed int
    }
    
    func (c car) floorIt() int {
        return c.maxSpeed
    }
    
    c := car{make: "Toyota", model: "Prius", maxSpeed: 120}
    c.floorIt() // 120

    Embedding

    Another OOP principle borrowed in Go is composition. In Go, we can embed structs to create more complex types while still maintaining the flexibility of smaller pieces available as individual structs.

    type car struct {
        make     string
        model    string
        maxSpeed int
    }
    
    type raceCar struct {
        car
        turboEngine string
    }
    
    rc := raceCar{
        car: car{
            make:     "Toyota",
            model:    "Prius",
            maxSpeed: 120,
        },
        turboEngine: "MAX",
    }
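    The payoff of embedding is field and method promotion: everything on car is reachable directly from a raceCar. A small sketch, reusing the illustrative types above:

```go
package main

import "fmt"

type car struct {
	make     string
	model    string
	maxSpeed int
}

type raceCar struct {
	car
	turboEngine string
}

func main() {
	rc := raceCar{
		car:         car{make: "Toyota", model: "Prius", maxSpeed: 120},
		turboEngine: "MAX",
	}

	// Promoted field: no need to write rc.car.maxSpeed.
	fmt.Println(rc.maxSpeed) // 120

	// The explicit path through the embedded struct still works too.
	fmt.Println(rc.car.model) // Prius
}
```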

    Go Performance

    Performance and Memory

    When looking at a language's performance, the two considerations here are memory usage and performance speed.

    Taking two ends of the spectrum, we could look at Rust on one end and C# on the other.

    C# is a high-level language that requires interpreting through a virtual machine. (A strange example, perhaps, because C# does compile, but only to the Intermediate Language, not directly to machine code.) C# also handles memory management. The overhead of the virtual machine and memory management makes for a fluid developer experience, but compromises performance speed and memory usage.

    Rust, on the other hand, is a compiled language, meaning that to run Rust, you build an executable from the human-readable Rust code. This saves the time it would take a virtual machine to interpret the language. Or, in the case of Python or Ruby, it eliminates the time it takes for the runtime to interpret the script.

    Rust also requires the developer to do much of their own memory management. When written properly, this allows for really performant applications, since that's one more piece of overhead removed from running Rust code.

    Where Go Fits

    Go uniquely sits in a great position between these two ends, balancing the benefits of a higher-level language with the speed of a compiled language.

    Go does compile to machine code. You can run go build main.go to compile the script down to an executable. So we get the benefit of quick execution, eliminating interpretation time.
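    As a concrete (if trivial) example, any file with a main package and a main function builds this way. Saved as main.go, the sketch below (greeting is just a stand-in for real program logic) compiles to a standalone binary with go build main.go:

```go
package main

import "fmt"

// greeting is a hypothetical stand-in for real program logic.
func greeting() string {
	return "Hello from a compiled Go binary!"
}

func main() {
	// go build main.go emits a self-contained executable;
	// the target machine needs no Go toolchain to run it.
	fmt.Println(greeting())
}
```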

    While doing so, Go bundles a much lighter package called the Go runtime, which handles garbage collection. With a specialized focus on memory management, this preserves that developer experience while adding far less overhead than the Java Runtime Environment or C#'s Common Language Runtime.

    Benchmarks of Go's speed land right between compiled, non-garbage-collected languages like C, C++, and Rust, and higher-level languages like Java and C#.

    One added benefit of being compiled is having one less dependency in your deployment environment. The platform isn't required to have a specific version of a Go interpreter available to execute a program.


    Parkening - A Minor Study

    Listen on Youtube

    Lucy can tell when I'm about to finish recording, she knows rubs are soon to follow!


    Halloween!

    Halloweeeeeeeeen!! 🎃

    Hello down there!

    I got a rock

    Bone-jour