Hanami and loading code, faster

I’ll be giving a talk in November at the SF Ruby Conference (tickets on sale now!). My talk is about speeding up your application’s development cycle by taking a critical eye to your application’s development boot. It all boils down to: do less. In Ruby, the easiest way (though not the simplest) is to load less code. So yeah, autoloading.

To expand my horizons and hopefully give a better talk, I branched out beyond my experience with Ruby on Rails to talk to Tim Riley about Hanami and how it handles code loading during development.

The following are my notes; it’s not a critical review of Hanami, and it only looks into a very narrow topic: code loading and development performance.

Ruby, and analogously Rails

Ruby has a global namespace; constants (classes, modules, CONSTANTS) are global singletons. When your code (or some code you’re loading—Ruby calls each file it loads a “feature” identified by its filesystem path) defines a constant, Ruby is evaluating everything about the constant: the class body, class attributes, basically anything that isn’t in a block or a method definition. And so any constants that are referenced in the code also need to be loaded and evaluated, and class ancestors, and their code and so forth. That’s the main reason booting an application is slow: doing stuff just to load the code that defines all the constants so the program can run.
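To make that concrete, here’s a tiny illustration (my own, not from any framework) of what runs at load time versus call time:

```ruby
# Class bodies are executed the moment the file is loaded; only method bodies
# (and blocks) are deferred until they're called.
$evaluated = []

class ApiClient
  $evaluated << :api_client_body   # runs at load time
end

class Notifier
  $evaluated << :notifier_body     # runs at load time
  Client = ApiClient               # ApiClient must already be defined here;
                                   # with an autoloader, this reference is what
                                   # triggers loading its file

  def deliver
    $evaluated << :deliver         # runs only when someone calls it
  end
end

$evaluated          # => [:api_client_body, :notifier_body]
Notifier.new.deliver
$evaluated.last     # => :deliver
```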

The name of the game in development, where you want to run a single test or browse a single route or open the CLI, is load less. If you can just avoid loading the constant, you can avoid loading the file the constant is defined in, and avoid loading all of its other dependencies and references until later, when you really need them (or never, in development).

The most common strategy for deferring stuff is: use a string as a stand-in for the constant, and only convert the string to a constant later, when you really need it. One example is Rails routes, where you’ll write to: "mycontroller#index" and not MyController. At some point mycontroller gets constantized to MyController, but that’s later, when you hit that particular route. Another example is Active Record association definitions, where you’ll use class_name: "MyModel" instead of class_name: MyModel, which only gets constantized when you call record.my_models.
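The mechanics are simple enough to sketch in plain Ruby (this LazyReference class is mine, purely illustrative; constantize is ActiveSupport sugar over Object.const_get):

```ruby
# A stand-in string defers the constant lookup (and therefore the file load)
# until the moment it's actually needed.
class LazyReference
  def initialize(class_name)
    @class_name = class_name   # just a string; nothing is loaded yet
  end

  def klass
    # Object.const_get is what ActiveSupport's String#constantize wraps;
    # in a Zeitwerk app, this lookup is what triggers loading the file.
    @klass ||= Object.const_get(@class_name)
  end
end

ref = LazyReference.new("MyModel")  # MyModel's file isn't loaded here...
class MyModel; end                  # ...it gets defined later (in a real app, by the autoloader)
ref.klass                           # => MyModel, resolved on first use
```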

In Rails, a lot of performance repair work for development is identifying places where a constant shouldn’t be directly referenced and instead should use some other stand-in until it’s really needed. In Rails, it can be confusing, because sometimes you can use a configuration string to refer to a constant, and sometimes you have to use a constant; it is inconsistent.

In Hanami, (nearly) everything has a string key

Hanami’s approach: make all of the application’s components referenceable by a string, called a key. (Again, Hanami does quite a bit more than that; I just mean in regards to code loading.) Objects declare which keys they depend on, and those dependencies are injected by the framework. So instead of writing this:

class MyClass
  cattr_accessor :api_client
  self.api_client = ApiClient.new # <-- loads that constant

  def transmit_something
    MyClass.api_client.transmit("something")
  end
end

…you would instead use Hanami’s Deps and write:

class MyClass
  include Deps["api_client"] # <-- injects the object

  def transmit_something
    api_client.transmit("something")
  end
end

Keys are global, and keys whose objects have been loaded live in Hanami.app.keys. If the key’s object hasn’t been loaded yet, it will be converted from a string to… whatever (not just constants)… when it’s needed for execution. Individual objects can be accessed with Hanami.app["thekey"] when debugging, but normal code should get them injected via Deps. By convention, keys match a class name, but they don’t have to. This is all powered by dry-system.
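Hanami’s real container comes from dry-system, but the lazy-resolution idea can be sketched in a few lines of plain Ruby (ToyContainer and its API are mine, purely for illustration):

```ruby
# Keys map to blocks; a block only runs (loading whatever constants it
# references) the first time its key is resolved, and the result is memoized.
class ToyContainer
  def initialize
    @definitions = {}
    @resolved = {}
  end

  def register(key, &block)
    @definitions[key] = block
  end

  def resolve(key)
    @resolved[key] ||= @definitions.fetch(key).call
  end

  # like Hanami.app.keys: only what's actually been loaded
  def keys = @resolved.keys
end

container = ToyContainer.new
container.register("api_client") { Object.new }  # nothing loaded yet
container.keys                                   # => []
container.resolve("api_client")                  # block runs now
container.keys                                   # => ["api_client"]
```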

Not everything has to have a key. Functional components in Hanami have a key, but classes that embody a bit of data (in Hanami these are called Structs) do not have entries in the app container, and therefore don’t have keys.

If you have something functional coming from outside Hanami (like the ApiClient in the code above, or something from a non-Hanami-specific gem), you can give it a key and define its lifecycle within the application via a Provider.
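Based on the Hanami 2 guides, a Provider for that hypothetical ApiClient might look roughly like this (the file path, gem name, and ApiClient itself are assumptions for illustration, not from a real app):

```ruby
# config/providers/api_client.rb (hypothetical)
Hanami.app.register_provider(:api_client) do
  prepare do
    # load the third-party code only when this provider is prepared
    require "api_client"
  end

  start do
    # register the long-lived instance under a key,
    # injectable elsewhere as Deps["api_client"]
    register "api_client", ApiClient.new
  end
end
```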

Briefly, some commentary: Common Rails development discourse includes “Rails is too magic”, leveled because the Rails framework can work out what constants you mean without you directly referencing them (e.g. has_many :comments implies there’s an Active Record Comment), and “just use a PORO” (plain old Ruby object), said when a developer is painfully jamming everything into narrow Rails framework primitives. With Hanami:

  • Hanami has quite a bit of “here’s a string, now it’s an object 🪄”, but it is consistently applied everywhere and has some nice benefits beyond brevity, like overriding dependencies.
  • Everything does sorta have to be fit into the framework, but there’s an explicit interface for doing so.

Assorted notes in this general theme

  • Providers are like “Rails initializers but with more juice” – they register components in the container. They have lifecycle hooks (prepare, start, stop) for managing resources. They’re lazily loaded and can have namespace capabilities for organizing related components.
  • Hanami encourages namespacing over Rails’ flat structure. “Slices” provide first-class support for modularizing applications like Rails Engines. Each slice has its own container and can have its own providers, creating bounded contexts.
  • Hanami uses Zeitwerk for code loading.
  • The dev server uses Guard to restart Puma in development. Because everything is so modularized, that’s good enough.
  • Code is lazy-loaded in development but fully pre-loaded in production.

Where things are going

In the Hanami Discord, Tim shared a proposal for building out a plugin system for Hanami… which to me looks a lot like Railties and Active Support lazy load hooks:

Using your grant, I propose to implement this Hanami extensions API. The end
goal will be to:

  • Allow all first-party “framework extension code” to move from the core Hanami
    gem back into the respective Hanami subsystem gems (e.g. the core Hanami
    gem should no longer have specific extension logic for views).
  • Allow third-party gems to integrate with Hanami on an equal footing to the first-
    party gems.

This will require building at least some of the following:

  • Ability for extensions to be detected by or registered with the Hanami framework.
  • Ability to enhance or replace Hanami CLI commands.
  • Ability to register new configuration settings on the Hanami app.
  • Hooks for extending core Hanami classes.
  • Hooks for adding logic to Hanami’s app boot process.
  • Adjustments to first-party Hanami gems to allow their classes to be used in an un-extended state when required.
  • A separate “extension” gem that can allow Hanami extensions to register their extensions without depending on the main Hanami gem.

And how this all started

Ending with what I originally shared with Tim to start our discussion, which I share partly cause I think it’s funny how easily I can type out 500 words today on a thesis like “why code loading in Ruby is hard”:

Making boot fast; don’t load the code unless you need it

Don’t load code until/unless you need it. DEFINITELY don’t create database connections or make any HTTP calls or invoke other services. How Rails does it: Rails autoloads as much as possible (framework, plugin/extension, and application code), either via Ruby’s autoload or Zeitwerk. The architectural challenge is: how do you set up configuration properties so that, when the code is loaded (and all the different pieces of framework/plugin/extension/application get their fingers on it), it is configured with the properties y’all ultimately want on it? There are two mechanisms:

  • A configuration hash, intended to be made up (somewhat) of primitives that are dependency-free and thus don’t load a bunch of code themselves,
  • A callback hook placed within autoloaded code, which one can register against and use to pull data out of configuration (framework/plugin/extension) or to override/overload behavior (your application), and which is only triggered when the code is loaded for real. Extensions put this in a Railtie; maybe you put it in an initializer.
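That hook mechanism (in Rails, it’s ActiveSupport.on_load and run_load_hooks) is small enough to sketch. This toy version is mine, not the real implementation:

```ruby
# Callbacks are registered against a name and held until the code that owns
# that name actually loads and flushes them; registering after the load runs
# the callback immediately.
module LoadHooks
  @hooks  = Hash.new { |h, k| h[k] = [] }
  @loaded = {}

  def self.on_load(name, &block)
    if @loaded.key?(name)
      @loaded[name].class_eval(&block)  # too late to defer: run now
    else
      @hooks[name] << block             # code not loaded yet: hold the block
    end
  end

  def self.run_load_hooks(name, base)
    @loaded[name] = base
    @hooks.delete(name)&.each { |block| base.class_eval(&block) }
  end
end

# Register a hook before the class exists -- nothing happens yet...
LoadHooks.on_load(:my_record) { def self.configured? = true }

class MyRecord; end
LoadHooks.run_load_hooks(:my_record, MyRecord)  # "loaded for real": hook fires

MyRecord.configured?  # => true
```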
The practical problems are:

  • Ideally everything would be stateless, just pulling values from configuration and getting torn down after every request/transaction/task, but also:
    • Some objects are long-lived, and you don’t want to constantly be tearing them down,
    • Sometimes locality of properties is nice, and it would be annoying to be like “either use this locally assigned value OR use this value from really far away in this super deep config object”,
    • Hopefully that config object is thread- and fiber-safe if you’re gonna be changing it later and you’re not really sure what’s happening right then in your application lifecycle,
  • A hook doesn’t exist in the place that you want to hook into, so you either have to:
    • go upstream and get a hook added, which is annoying (just hook every class and feature, why not?!),
    • load the code prematurely so you can directly modify it,
  • When something else (framework/plugin/extension/application) prematurely loads the code (chaotically or intentionally) before you add your own configuration or register a hook callback, and the behavior is stateful or has to be backed out (example: it’s configuration for connections in a connection pool, and early invocation fills the pool with connections carrying the premature configuration; to re-configure, you have to drain the pool of the old prematurely-configured connections, and maybe that’s hard),
  • Examples of pain:
    • Devise.
      • Its route helper (devise_for) loads your Active Record model when routes load, which in Rails < 8.0 was when your app boots, which is otherwise premature,
      • Changing the layout of Devise controllers. They don’t have load hooks (maybe they should?). You can subclass them and manually mount them in your app, but that’s annoying,
    • Every initializer where you try to assign config and maybe it won’t work cause something else already hooked it and loaded it and it’s baked.

How Hanami does it:

@inouire in the Rails Discord shared a couple of links:

  • The Hanami way of handling the dependency container: https://guides.hanamirb.org/v2.2/app/container-and-components/
  • Autoloading: https://guides.hanamirb.org/v2.2/app/autoloading/
  • Lazy booting: https://guides.hanamirb.org/v2.2/app/booting/

Hanami questions from Ben:

  • Components are singletons that are pure-ish functions? Do they get torn down / recreated on every request, or does the same object exist for the lifetime of the application?
  • Is there a pattern of assigning properties to class variables? Seems like most stuff is pure-ish functions. How do you handle objects that you want to be long-lived, like Twitter::Client.new or something?
  • I didn’t see plugins/extensions. Are you required to subclass and overload a component, or can you poke around in an existing class/component? Can I defer poking around in a component until it’s loaded? (like an autoload hook)
  • Are there any patterns you see people do that would slow down their Hanami app’s boot, that you wish they didn’t do?

Serializing ViewComponent for Active Job and Turbo Broadcast Later

I recently started using ViewComponent. I’ve been gradually removing non-omakase libraries from my Rails applications over the past decade, but ViewComponent is alright. I was strongly motivated by Boring Rails’ “Hotwire components that refresh themselves”, cause matching up all the dom ids and stream targets between views/partials and… wherever you put your Stream and Broadcast renderers is a pain.

You might also know me as the GoodJob author. So of course I wanted to have my Hotwire components refresh themselves later and move stream broadcast rendering into a background job. I wanted to simply call MessagesComponent.add_message(message) and have it broadcast an update later to the correct stream and target, which are all nice and compactly stored inside of the View Component:

class MessagesComponent < ApplicationComponent
  def self.add_message(message)
    user = message.user
    Turbo::StreamsChannel.broadcast_action_later_to(
      user, :message_list,
      action: :append,
      target: ActionView::RecordIdentifier.dom_id(user, :messages),
      renderable: MessageComponent.serializable(message: message), # <- that right there
      layout: false
    )
  end

  def initialize(user:, messages:)
    @user = user
    @messages = messages
  end

  erb_template <<~HTML
    <%= helpers.turbo_stream_from @user, :message_list %>
    <div id="<%= dom_id(@user, :messages) %>">
      <%= render MessageComponent.with_collection @messages %>
    </div>
  HTML
end

That’s a simple example.

Making a renderable work later

The ViewComponent team can be really proud of achieving first-class support in Rails for a library like ViewComponent. Rails already supported views and partials, and now it also supports any object that quacks like a renderable.

For ViewComponent to be compatible with Turbo Broadcasting later, those View Components need to be serializable by Active Job. That’s because Turbo Rails’ broadcast_*_later_to takes the arguments it was passed and serializes them into a job so they can be run elsewhere better/faster/stronger.

To serialize a ViewComponent, we need to collect its initialization arguments, so that we can reconstitute it in that elsewhere place where the job is executed and the ViewComponent is re-initialized. To initialize a ViewComponent, you call new, which calls its initialize method. To patch into that, there are a couple of different strategies I thought of taking:

  • Make the developer figure out which properties of an existing ViewComponent (ivars, attributes) should be grabbed, and how to do that.
  • Prepend a module with a method in front of ViewComponent#initialize. Our module would always have to be at the top of the ancestors hierarchy, because subclasses might overload initialize themselves, so we’d need an inherited callback that re-prepends the module every time that happened.
  • Simply initialize the ViewComponent via another, more easily interceptable method when you want it to be serializable.

I respect that ViewComponent really wants a ViewComponent to be just like any other Ruby object that you create with new and initialize, but it makes this particular goal, serialization, rather difficult. You can maybe see that the ViewComponent maintainers ran into a few problems with initialization themselves: a collection of ViewComponents can optionally have each member initialized with an iteration number, but to do that, ViewComponent has to introspect the initialize parameters to determine whether the object implements the iteration parameter before deciding to send it 🫠 That parameter introspection also means that we can’t simply prepend a redefined generic initialize(*args, **kwargs), because that would break the collection feature. Not great 💛
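The introspection in question is Ruby’s Method#parameters. This sketch (class and method names are mine) mirrors the idea, and shows why a generic prepended initialize would break it:

```ruby
# ViewComponent-style check: only pass the counter argument if the
# component's initialize actually declares it.
class PlainComponent
  def initialize(item:) = @item = item
end

class CountedComponent
  def initialize(item:, item_counter:)
    @item, @counter = item, item_counter
  end
end

def accepts_counter?(component_class)
  component_class.instance_method(:initialize).parameters.any? do |_type, name|
    name == :item_counter
  end
end

accepts_counter?(PlainComponent)    # => false
accepts_counter?(CountedComponent)  # => true

# A prepended generic initialize(*args, **kwargs) reports its parameters as
# [[:rest, :args], [:keyrest, :kwargs]], hiding :item_counter from the
# introspection above -- which is why that strategy breaks collections.
```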

So, given the compromises I’m willing to make between ergonomics and complexity and performance, given my abilities, and my experience, and what I know at this time… I decided to simply make a new initializing class method, named serializable. If I want my ViewComponent to be serializable, I initialize it with MyComponent.serializable(foo, bar:).

# frozen_string_literal: true
# config/initializers/view_component.rb
#
# Instantiate a ViewComponent that is (optionally) serializable by Active Job
# but otherwise behaves like a normal ViewComponent. This allows it to be passed
# as a renderable into `broadcast_action_later_to`.
#
# To use, include the `ViewComponent::Serializable` concern:
#
#  class ApplicationComponent < ViewComponent::Base
#    include ViewComponent::Serializable
#  end
#
# And then call `serializable` instead of `new` when instantiating:
#
#   Turbo::StreamsChannel.broadcast_action_later_to(
#     :admin, client, :messages,
#     action: :update,
#     target: ActionView::RecordIdentifier.dom_id(client, :messages),
#     renderable: MessageComponent.serializable(message: message)
#   )
#
module ViewComponent
  module Serializable
    extend ActiveSupport::Concern

    included do
      attr_reader :serializable_args
    end

    class_methods do
      def serializable(*args)
        new(*args).tap do |instance|
          instance.instance_variable_set(:@serializable_args, args)
        end
      end
      ruby2_keywords(:serializable)
    end
  end
end

class ViewComponentSerializer < ActiveJob::Serializers::ObjectSerializer
  def serialize?(argument)
    argument.is_a?(ViewComponent::Base) && argument.respond_to?(:serializable_args)
  end

  def serialize(view_component)
    super(
      "component" => view_component.class.name,
      "arguments" => ActiveJob::Arguments.serialize(view_component.serializable_args),
    )
  end

  def deserialize(hash)
    hash["component"].safe_constantize&.new(*ActiveJob::Arguments.deserialize(hash["arguments"]))
  end

  ActiveJob::Serializers.add_serializers(self)
end

Real talk: I haven’t packaged this into a gem. I didn’t want to maintain it for everyone, and there are some View Component features (like collections) it doesn’t handle yet because I haven’t used them (yet). I think this sort of thing is first-class behavior for the current state of Rails and Active Job and Turbo, and I’d rather the library maintainers figure out what the best balance of ergonomics, complexity, and performance is for them. I’ve been gently poking them about it in their Slack; they’re great, and I believe we can arrive at something even better than this patch I’m running with myself for now 💖

Notes from building a “who is doing what right now on our website?” presence feature with Action Cable

A screenshot of my application with little presence indicators decorating content

I recently was heads down building a “presence” feature for the case and communications management part of my startup’s admin dashboard. The idea being that our internal staff can see what their colleagues are working on, better collaborate as a team with overlapping responsibilities, and reduce duplicative work.

The following is more my notes than a cohesive narrative. But maybe you’ll get something out of it.

Big props

In building this feature, I got a lot of value from:

  • Basecamp’s Campfire app, recently open sourced, which has a sorta similar feature.
  • Rob Race’s Developer Notes about building a Presence Feature
  • AI slop, largely JetBrains’ Junie agent. Not because it contributed code to the final feature, but because I had the agent try to implement it from scratch 3 different times, and while none of them fully worked (let alone met my quality standards or covered all the edges), it helped sharpen the outlines and common shapes, and surfaced some API methods to click into that I wasn’t aware of. It made the difference between undirected poking around and being like “ok, this is gonna require no more than 5 objects in various places working together; let’s go!”

The big idea

The feature I wanted to build would track multiple presence keys at the same time. So if someone is on a deep page (/admin/clients/1/messages), they’d be present for that specific client, any client, as well as the dashboard as a whole.
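Deriving multiple keys from one location can be as simple as expanding the path’s prefixes. This helper and its key format are my own sketch, not the app’s actual code:

```ruby
# One deep page implies presence at every level above it:
# the specific client, any client, and the dashboard as a whole.
def presence_keys(path)
  segments = path.delete_prefix("/").split("/")
  (1..segments.length).map { |n| segments.first(n).join("/") }
end

presence_keys("/admin/clients/1/messages")
# => ["admin", "admin/clients", "admin/clients/1", "admin/clients/1/messages"]
```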

I also wanted to keep separate “track my presence” and “display everyone’s presence”.

What I ended up with was:

  1. Client in the browser subscribes to the PresenceChannel with a key param. It also sets up a setInterval heartbeat to send a touch message every 30 seconds. This is a Stimulus controller that uses the Turbo cable connection, cause it’s there.
  2. On the server, the PresenceChannel has connected, disconnected, and touch actions and stores the key passed during connect. It writes to an Active Record model UserPresence and calls increment, decrement, and touch respectively.
  3. The Active Record model persists all these things atomically (Postgres!) and then triggers vanilla Turbo Stream Broadcast Laters (GoodJob!).
  4. The frontend visually is all done with vanilla Turbo Stream Broadcasts over the vanilla Turbo::StreamsChannel appending to and removing unique dom elements that are avatars of the present users.

It works! I’m happy with it.

Ok, let’s get some grumbles out.

Action Cable could have a bit more conceptual integrity

I once built some Action Cable powered features about 7 years ago, before Turbo Broadcast Streams, and it wasn’t my favorite. Since then, Turbo Broadcast Streams totally redeemed my feelings about Action Cable… and then I had to go real deep again on Action Cable to build this Presence feature.

At first I thought it was me, “why am I not just getting this?”, but as I became more familiar I came to the conclusion: nah, there’s just a lot of conceptual… noise… in the interface. I get it, it’s complicated.

In the browser/client: You have a Connection; a Connection “opens” and “closes”, but also “reconnects” (reopens?). Then you create a Subscription on the Connection by subscribing to a named Channel (which is a backend/server concept); Subscriptions have a “connected” callback when “subscription has been successfully completed” (subscribed?) and a “disconnected” one for “when the client has disconnected with the server” (a Connection disconnect). If the Connection closes, reconnects, and reopens, then the Subscription’s disconnected and connected callbacks are triggered again. Subscriptions can also be “rejected”. You can see some of this drift in the message type key/value constants.

…as a concrete example: you don’t connection.subscribe(channelName, ...), you consumer.subscriptions.create(channelName, ...) (oh jeez, it’s called a Consumer). Turbo Rails tries to clean up some of this: you can call cable.subscribeTo(channelName, ...) to subscribe to a Channel using Turbo Stream Broadcasts’ existing connection. But even that is compromised, because you don’t subscribeTo a channel name; you subscribeTo by passing an object of { channel: channelName, params: paramsForChannelSubscribe }. Here’s an example from Campfire.

On the server, I have accepted that the Connection/Channel/Stream relationship challenges me, which is probably because of the inherent complexity of multiplexing Streams (no, not Turbo “Streams”, Action Cable “Streams”) over Channels that are themselves multiplexed over connection(s); it makes my head spin. Channels connect Streams, and one broadcasts on Streams, and one can also transmit on a Channel to a specific client, and often one does broadcast(channel, payload) but channel may be the name of a Stream. My intuition is that Streams were bolted onto Action Cable’s Channel implementation rather than being part of the initial conception, though it all landed in Rails at once.

I’m a pedantic person, and it’s tiring for me to write about this stuff with precision. Active Storage named variants—with its record-blob-variant-blob-record—has an analogous vibe of “I guess it works and I have a hard time looking directly at it”.

I have immense compassion and sympathy and empathy for trying to wrangle something as complex as Action Cable. And also fyi, it is a lot.

Testing

  • You’ll need to isolate and reset Action Cable after individual tests, to prevent queries from being made after the transaction rollback or after the pinned database connection changes: ActionCable.server.restart
  • If you see deadlocks, pg.exec freezes, or Active Record gives you undefined method 'count' for nil (because the query result object is nil), that’s a sign that the database connection is being read out-of-order/unsafely asynchronously/all whack.

Page lifecycle

Live and die by the Browser Page Lifecycle API.

Even with data-turbo-permanent, Stimulus controllers and turbo-cable-stream-source elements get disconnected and reconnected. Notice that there is a lot of use of nextTick/nextFrame in those libraries to try to smooth this over.

  • hotwired/turbo: [does not work as permanent](https://github.com/hotwired/turbo/issues/868#issuecomment-1419631586)
  • Miles Woodroffe: “Out of body experience with turbo” about DOM connect/disconnects during Turbo Drive

And general nits that, left unfixed, necessitate more delicate coding.

I ended up making a whole new custom element, data-permanent-cable-stream-source. All that just to wait a tick before actually unsubscribing the channel, in case the element is reconnected to the page by data-turbo-permanent. What does that mean for unload events? Beats me, for now.

What am I doing about it?

All this work did generate some upstream issues and PRs. I mostly worked around them in my own app, but maybe we’ll roll the rock uphill a little bit.

Notes, right?

Yep, these are my notes. Maybe they’re helpful. No big denouement. The feature works, I’m happy with it, my teammates are happy, and I probably wouldn’t have attempted it at all if I didn’t have such positive thoughts about Action Cable going in, even if the work itself got deeply into the weeds.

Building deterministic, reproducible assets with Sprockets

This is a story that begins with airplane wifi, and ends with the recognition that everything is related in web development.

While on slow airplane wifi, I was syncing this blog’s git repo, and it was taking forever. That was surprising because this blog is mostly text, which I expected shouldn’t require many bits to transfer for Git. Looking more deeply into it (I had a 4-hour flight), I discovered that the vast majority of the bits were in the git branch of built assets that gets deployed to GitHub Pages (gh-pages) when I build my Rails app into a static site with Parklife. And the bits in that branch were assets (css, javascript, and a few icons and fonts) built by Sprockets, whose contents were changing every time the blog was built and published. What changed?

  • Sprockets creates a file manifest that is randomly named ".sprockets-manifest-#{SecureRandom.hex(16)}.json".
  • Within the file manifest, there is an entry for every file built by Sprockets, which includes the original asset’s mtime: when the file on the filesystem was last touched, even if the contents didn’t change.
  • By default, Sprockets generates gzipped .gz copies of compressible assets, and it includes the uncompressed file’s mtime in the gzipped file’s header, producing different binary content even though the compressed payloads’ contents didn’t change.

Do I need that? Let’s go through it.

The Sprockets Manifest

The Sprockets Manifest is pretty cool (I mean public/assets/.sprockets-manifest-*.json, not app/assets/config/manifest.js, which is different). The manifest is how Sprockets is able to add unique cache-breaking digests to each file while still remembering what the file was originally named. When building assets on a server with a persisted filesystem, Sprockets also uses the manifest to keep old versions of files around: bin/rails assets:clean will keep the last 3 versions of built assets, which is helpful for blue-green deployments. Heroku also has a bunch of custom stuff powered by this to make deployments seamless.
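
To make that concrete, here’s roughly what the manifest contains (a sketch modeled on my own builds; the digest values here are made up):

```ruby
require "json"

# A hypothetical .sprockets-manifest-*.json: the digests are invented,
# but the "files"/"assets" structure and the per-file mtime are the real shape.
manifest = JSON.parse(<<~MANIFEST)
  {
    "files": {
      "application-abc123.css": {
        "logical_path": "application.css",
        "mtime": "2025-06-29T00:00:00+00:00",
        "size": 1024,
        "digest": "abc123"
      }
    },
    "assets": {
      "application.css": "application-abc123.css"
    }
  }
MANIFEST

# "assets" maps logical names to digested filenames; "files" carries
# the per-file metadata, including the mtime that churns on every build.
manifest.dig("assets", "application.css") # => "application-abc123.css"
```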

But none of that is applicable to me and this blog, which gets built from scratch and committed to Git. Or for that matter, when I build some of my other Rails apps with Docker; not unnecessarily busting my cached file layers would be nice 💅

The following is a monkeypatch, which works with Sprockets right now but I’m hoping to ultimately propose as a configuration option upstream (as others have proposed).

# config/initializers/sprockets.rb
module SprocketsManifestExt
  def generate_manifest_path
    # Always generate the same filename
    ".sprockets-manifest-#{'0' * 32}.json"
  end

  def save
    # Use the epoch as the mtime for everything
    zero_time = Time.at(0).utc
    @data["files"].each do |(_path, asset)|
      asset["mtime"] = zero_time
    end

    super
  end

  Sprockets::Manifest.prepend self
end

Now, if you’re like me (on a plane), you might be curious about why the obsessive tracking of mtime. I have worked alongside several people in my career with content-addressable storage obsessions. The idea being: focus on the contents, not the container. And mtime is very much a concern of the container. But Sprockets makes the case that “Compiling assets is slow,” so I can see it’s useful to quickly check when a file was modified, in a lot of cases… but not mine.

Let’s move on.

GZip, but maybe you don’t need it

So… everything in web development is connected. While wondering why new copies of every .gz file were being committed on every build, I remembered what my buddy Rob recently did in Rails: “Make ActiveSupport::Gzip.compress deterministic.”

I have some tests of code that uses ActiveSupport::Gzip.compress that have been flaky for a long time, and recently discovered this is because the output of that method includes the timestamp of when it was compressed. If two calls with the same input happen during different seconds, then you get different output (so, in my flaky tests, they fail to compare correctly).

GZip takes a parameter called mtime, which is stored in the gzip header and restores the timestamp of the compressed file(s) when they are uncompressed. It changes the content of the gzipped file, because it stores the timestamp in the contents of the file, but it doesn’t affect the mtime of the gzipped file container.
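
You can see this with Ruby’s own Zlib (a quick sketch, nothing Sprockets-specific here):

```ruby
require "stringio"
require "zlib"

def gzip_with_mtime(data, mtime)
  out = StringIO.new
  gz = Zlib::GzipWriter.new(out)
  gz.mtime = mtime # stored in the 4-byte MTIME field of the gzip header
  gz.write(data)
  gz.close
  out.string
end

a = gzip_with_mtime("body { color: red }", Time.at(0))
b = gzip_with_mtime("body { color: red }", Time.at(1_700_000_000))

a == b           # => false: identical payload, different bytes
a[8..] == b[8..] # => true: everything after the header's MTIME field matches
```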

So in the case of Sprockets, if the modification date of the uncompressed asset changes, regardless of whether its contents have changed, a new and different (according to git or Docker) gzipped file will be generated. This was really bloating up my git repo.

Props to Rack maintainer Richard Schneeman who dug further down this hole previously, admirably asking the zlib group themselves for advice. The commentary made a mention of nginx docs, which I assume is for ngx_http_gzip_static_module which says:

The files can be compressed using the gzip command, or any other compatible one. It is recommended that the modification date and time of the original and compressed files be the same.

But that’s not the GZip#mtime value stored inside the contents of the gzip file; that’s the mtime of the .gz file container. Sprockets also sets that, with File.utime.

It’s easy enough to patch the mtime to the “unknown” value of 0:

# config/initializers/sprockets.rb
module SprocketsGzipExt
  def compress(file, _target)
    # Pass 0 ("no timestamp available", per the gzip spec) as the mtime
    # instead of the asset's, so identical contents gzip to identical bytes
    archiver.call(file, source, 0)
    nil
  end

  Sprockets::Utils::Gzip.prepend self
end

…though if you’re in my shoes, you might not even need these gzipped assets. afaict only Nginx makes use of them with the non-default ngx_http_gzip_static_module module; Apache requires some complicated RewriteRules; Puma doesn’t serve them, CDNs don’t request them. Maybe turn them off? 🤷

# config/initializers/sprockets.rb
Rails.application.configure do
  config.assets.gzip = false
end

Fun fact: that configuration was undocumented.

Maybe please don’t even pass mtime to gzip for web assets

All of this stuff about file modification dates reminded me of another thing I had once previously rabbit-holed on, which was poorly behaved conditional requests in RSS Readers. The bad behavior involved inappropriately caching web requests whose Last-Modified HTTP header changed, but their contents didn’t. And how do webservers generate their Last-Modified header value? That’s right, file mtime, the one that can be set by File.utime!

…but not the one set by GZip#mtime=. I cannot find any evidence anywhere that that value, stored in the contents of the gzip file, matters. Nada. All it does is make the gzip file’s contents different, because of that one tiny value being included. I can’t imagine anything cares about the original mtime when it’s unzipped that wasn’t already transmitted via the Last-Modified HTTP header. What am I missing?

Of the evidence I have, it seems like developers set GZip#mtime=… because it’s an option? I couldn’t find a reason in the Sprockets history. I noticed that Rack::Deflater does the same, for reasons I haven’t figured out from its history either. This behavior probably isn’t busting a lot of content-based caches unnecessarily, but it probably busts some. So maybe don’t do it unless you need to.
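
For completeness, the only way I know to even observe the embedded value from Ruby is Zlib::GzipReader#mtime, which just hands it back to you:

```ruby
require "stringio"
require "zlib"

io = StringIO.new
gz = Zlib::GzipWriter.new(io)
gz.mtime = Time.at(0) # 0 is the gzip spec's "no timestamp is available" value
gz.write("compressed in 1970, apparently")
gz.close

reader = Zlib::GzipReader.new(StringIO.new(io.string))
reader.read                # the payload comes back unchanged
reader.mtime == Time.at(0) # => true: the header value, which nothing uses
```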

Everything I know about AI, I learned by reading the AWS Bedrock Client Ruby SDK code

This essay is a little bit about me and how I solve problems, and a little bit about AI from the perspective of a software developer building AI-powered features into their product.

The past week at my startup has been a little different. I spent the week writing a grant application for “non-dilutive funding to accelerate AI-enabled solutions that help governments respond to recent federal policy changes and funding constraints in safety net programs.” It wasn’t particularly difficult, as we’re already deep into doing the work 💅 …but it was an interesting experience breaking that work down into discrete 250-word responses, all 17 (!) of them on the grant application.

One of my friends is a reviewer panelist (she’ll recuse herself from our proposal), and I was explaining my struggle to find an appropriate level of detail. Comparing an answer like:

…we use AWS Bedrock models which are SOC, HIPAA, and Fedramp compatible, and integrated via its SDK which has robust guardrail functions like contextual grounding and output filters that we’re using to ensure accuracy and safety when producing inferenced text output…

And:

…we have robust controls for ensuring the safety and accuracy of AI-powered features…

That all might sound like word salad anyway, so I compared it analogously to saying, in the context of web design:

… we’re designing our application using contemporary HTML and CSS features like media queries, and minimal Javascript via progressive enhancement, to be usable and accessible across web browsers on devices from mobile phones to desktop computers….

And:

….mobile, responsive web design…

Working and communicating at the correct level of complexity is the work. While I’m developing software, I tend to be reductive; as the meme goes: I’m not here to talk. Just put my fries in the bag, bro. My HTTP goes in the bag. My DOM goes in the bag. Just put my Browser Security Model in the bag.

I guess I have the benefit of perspective, working in this field for 20+ years. While things have gotten to layer-upon-layer complexity, I can remember what simple looks and feels like. It’s also never been simple.

For example, in the civic tech space, there have been lots of times where on one side someone wants to talk about civic platforms and government vending machines and unleashing innovation, and on the other side is a small room with a vendor representative who is existentially opposed to adding a reference field to a data specification without which the whole system is irreconcilably unusable. The expansive vision and the tangible work.

I believe, at the core of all of this IT (Information Technology, or ICT, Information and Communications Technology, as it’s known globally), we’re doing the Push It Patrick thing: take information from one place, and push it somewhere else.

Push it Patrick GIF

Take that information from a person via a form, from a sensor, from a data feed, from a process, and push it somewhere else. Sure, we may enrich and transform it and present it differently, and obviously figuring out what is useful and valuable and useable is the work. From the backend to the frontend, and the frontend to the backend. From client to server, from server to server, protocol to protocol, over, under, you get the idea. The work is pushing information somewhere else.

Anyways, about that AI…

From Brian Merchant’s Blood in the Machine newsletter, describing going to an AI retreat thing:

I admittedly had a hard time with all this, and just a couple hours in, I began to feel pretty uncomfortable—not because I was concerned with what the rationalists were saying about AGI, but because my apparent inability to occupy the same plane of reality was so profound. In none of these talks did I hear any concrete mechanism described through which an AI might become capable of usurping power and enacting mass destruction, or a particularly plausible process through which a system might develop to “decide” to orchestrate mass destruction, or the ways it would navigate and/or commandeer the necessary physical hardware to wreak its carnage via a worldwide hodgepodge of different interfaces and coding languages of varying degrees of obsolescence and systems that already frequently break down while communicating with each other.

I mean… exactly. Like what even.

From my own experience of writing that grant application I mentioned at the beginning of this post, and enumerating all of the AI-powered features that we’ve built already, are prototyping, or confidently believe we can deliver in the near-term future… it’s quite a lot. And it’s not that different from anything that’s come before: building concrete stuff that concretely works. I wrote something similar back in January too, so maybe this feeling is here to stay.

Where I struggled most to write, in so many places, about trust and safety and risk and capacity… was explaining how we’re using functions that are quite simply exposed via the SDK. AWS Bedrock is how Amazon Web Services provides AI models as a billable resource developers can use. The SDK is how you invoke those AI models from your application. Just put the method signature in the bag. It’s all documented: the #converse_stream method, pretty much the only method to use, has (no joke) 1003 lines of documentation above it describing all of the options to pass and all of the data that gets returned:

  • Providing an inference prompt
  • Attaching documents
  • Tool usage, which is how models can be coerced to produce structured output
  • Contextual grounding, to coerce the model to use context from the input rather than its foundational training sources.
  • Guardrails and safety filters, to do additional checks on the output, sometimes by other models.
  • …and all of the limitations and constraints that are very real and tangible. By which I mean the maximum number of items one can send in an array or the maximum number of bytes that can be sent as a base64-encoded string.

Every option is very concretely about passing a simple hash of data in, and getting a hash of data out. Just put the Ruby Hash in the bag.
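
For instance, a sketch of that hash-in/hash-out shape (parameter names follow the aws-sdk-bedrockruntime gem’s documentation; the model id and prompt are just examples):

```ruby
# The request is a plain Ruby Hash, top to bottom.
params = {
  model_id: "anthropic.claude-3-haiku-20240307-v1:0", # example model id
  messages: [
    { role: "user", content: [{ text: "Translate to Spanish: Hello, world" }] }
  ],
  inference_config: { max_tokens: 512, temperature: 0.2 }
}

# With a real client you'd pass it along and handle the streamed events,
# something like (untested sketch, per the SDK's event stream docs):
#
#   client = Aws::BedrockRuntime::Client.new(region: "us-east-1")
#   client.converse_stream(params) do |stream|
#     stream.on_content_block_delta_event { |event| print event.delta.text }
#   end

params.dig(:messages, 0, :content, 0, :text) # a Hash in, and a Hash comes out
```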

To analogously compare this to one of the oldest and boringest AWS services, the Simple Storage Service: there is, with one hand, waving about how “the capability to store and retrieve an unlimited amount of data will change the world” and then, with the other hand, precisely “overriding the Content-Type of an S3 file upload”. Reading the method signature is the latter.

And I don’t mean to imply that everything in that 1003-line docblock is all you need to know. But you might wonder, say, “When might I want to get a resp.output.message.content[0].citations_content.citations #=> Array?” and then you google it and go down a rabbit hole to learn that citations are just another form of tool usage, and sometimes the model won’t do it, and if you keep digging down that rabbit hole it becomes evident that these are, at heart, still probabilistic text generators that are useful and interesting in the same way S3 is useful and interesting, and also isn’t. It’s a totally different conversation.

So, if there are any takeaways to be had here:

  • This stuff is as boringly useful as any other AWS service is or isn’t, if you’re familiar with the vast number of AWS services.
  • It’s maybe embarrassing to write about in tangible form because it’s already been boringly commodified as a service through AWS.
  • …and also there are tangible, useful things to be built. And a lot of intellectual joy in breaking down how some high-level feature is built on top of these low-level services.

My self-serving interest here is that I’d love to talk to other folks who are building stuff in Ruby on Rails using AI and LLMs and inference about the boring stuff involved in taking information from one place, and pushing it somewhere else.

For example, yesterday I posted in the Ruby on Rails Link Slack #ai-lounge channel:

Anyone building AI-powered features into their application? I’ve got an interface for translating a text field into another language, and I was curious if anyone has a pattern they like with Turbo/ActionCable/Stimulus for streaming responses to a particular form for a single client (e.g. there’s not yet a model record that can be broadcasted from). This is what I’m doing (hopefully it makes sense 😅) …

…and I’m waiting for a response.

Consider Thruster with Puma on Heroku

To briefly catch you up to speed if you haven’t been minutely tracking Ruby on Rails performance errata: the Puma webserver has some mildly surprising behavior with the order in which it processes and prioritizes requests that are pipelined through keepalive connections; under load, it can lead to unexpected latency.

Heroku wrote ~3,000 words about this Puma thing, and very smart people are working on it. All of this became mildly important because: Heroku upgraded their network router (“Router 2.0”), which does support connection keepalive, which has the potential to reduce a little bit of latency by reducing the number of TCP handshakes going over Heroku’s internal network between their router and your application dyno. People want it.

When you read the Heroku blog post (all several thousand words of it), it will suggest working around this with Puma configuration like (1) disabling connection keepalive in Puma or (2) disabling a Puma setting called max_fast_inline, though I’m pretty sure this has the same effect in Puma as disabling connection keepalives too (last I checked there wasn’t consensus in Puma as to what parts of the related behavior were intended but surprising, and what was unintended bugs in the logic).

Anyways, there’s a 3rd option: use Thruster.

  • Requests on the Heroku network between the Heroku router and Thruster running in your application dyno can use connection keepalives (sidenote: I’m 98% confident Thruster supports keepalives because Go net/http enables keepalives by default and Thruster doesn’t appear to explicitly disable them)
  • Requests locally within your application dyno between Thruster and Puma can disable connection keepalive and there shouldn’t be any network latency for the TCP handshake because it’s all happening locally in the dyno.

No one else seems to be blogging about this—a fact pointed out when I suggested this in the Rails Performance Slack. So here ya go.

  1. Add the thruster gem
  2. Update your Procfile: web: HTTP_PORT=$PORT TARGET_PORT=3001 bundle exec thrust bin/rails server
  3. Disable Puma’s keepalives: enable_keep_alives false

I was already using Thruster with Puma on Heroku because of the benefits of x-sendfile support. If you’re worried about resource usage (because Thruster is yet another process), it’s been pretty minimal. I looked just now at one app: ~13MB for Thruster next to ~200MB for the Rails app running in Puma; seems tiny to me.

$ heroku ps:exec -a APPNAME
# ....
$ ps -eo rss,pss,cmd
  RSS   PSS CMD
    4     0 ps-run
11324 12792 /app/vendor/bundle/ruby/3.4.0/gems/thruster-0.1.14-x86_64-linux/exe/
 2960  1095 sshd: /usr/sbin/sshd -f /app/.ssh/sshd_config -o Port 1092 [listener
 2220   407 /bin/bash -l -c HTTP_PORT=$PORT TARGET_PORT=3001 bundle exec thrust
199336 187215 puma 6.6.0 (tcp://0.0.0.0:3001) [app]
 8316  1821 ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -o HostKeyAlg
 9172  6346 skylightd
 8244  1367 sshd: u16321 [priv]
 5548  1296 sshd: u16321@pts/0
 4444  1178 -bash
 4036  1964 ps -eo rss,pss,cmd

How to customize Rails I18n key suffixes like _md for Markdown

If you’ve had reason to use internationalization in Ruby on Rails, you’ve probably used a nifty feature of it:

Keys with a _html suffix… are marked as HTML safe. When you use them in views the HTML will not be escaped.

Authoring HTML within translations can be a pain because HTML is quite verbose and easy to mess up when maintaining multiple versions of the same phrase, or paragraph, or page across multiple languages.

It would be nice 💅 to have something like this:

Keys with a _md suffix can be authored in Markdown and will be automatically converted to HTML and marked as HTML safe.

Markdown is a lot less verbose than HTML and easier to write and eyeball. Let’s do it!

First, we have to patch into the I18n translate method. It looks something like this:

# config/initializers/markdown.rb

module Markdown
  module I18nBackendExt
    def translate(locale, key, options)
      result = super
      # Rails missing key returns as MISSING_TRANSLATION => -(2**60) => -1152921504606846976
      if key.to_s.end_with?("_md") && result.is_a?(String)
        if result.include?("\n")
          Markdown.convert(result)
        else
          Markdown.inline(result)
        end
      else
        result
      end
    end
  end
end

ActiveSupport.on_load(:i18n) do
  I18n.backend.class.prepend Markdown::I18nBackendExt
end

Fun Fact: Rails does a clever thing to detect missing translations. I18n accepts a stack of fallback defaults, and Rails appends a magic number to the back of that stack: -(2**60) => -1152921504606846976. If a translation ever returns that value, Rails assumes that the translation fell through the entire fallback stack and is therefore missing. (It took me a bit of sleuthing to figure out what the heck this weird number meant while poking around.)
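
That magic number is just plain arithmetic, easy to check in irb:

```ruby
# Rails' sentinel for a translation that fell through every fallback default
MISSING_TRANSLATION = -(2**60)
MISSING_TRANSLATION # => -1152921504606846976
```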

Second, we patch the Rails HTML Safe behavior to also make these strings HTML safe too:

# config/initializers/markdown.rb

module Markdown
  module HtmlSafeTranslationExt
    def html_safe_translation_key?(key)
      key.to_s.end_with?("_md") || super
    end
  end
end

ActiveSupport::HtmlSafeTranslation.prepend Markdown::HtmlSafeTranslationExt

That’s pretty much it!

If you’re uncomfortable patching things, Tim Masliuchenko has a gem called I18n::Transformers that makes it easy to create custom key-based transformations. I believe you’ll still need to patch into the HTML safety behavior of Rails though—and anything involving marking things as HTML-safe should always be scrutinized for XSS potential.

Here’s the full initializer I have, including how I get Kramdown to create “inline” markdown:

# config/initializers/markdown.rb

module Markdown
  def self.convert(text = nil, **options)
    raise ArgumentError, "Can't provide both text and block" if text && block_given?

    text = yield if block_given?
    return "" unless text

    text = text.to_s.strip_heredoc
    options = options.reverse_merge(
      auto_ids: false,
      smart_quotes: ["apos", "apos", "quot", "quot"] # disable smart quotes
    )
    Kramdown::Document.new(text, options).to_html
  end

  def self.inline(text = nil, **)
    # Custom input parser defined in Kramdown::Parser::Inline
    convert(text, input: "Inline", **).strip
  end

  module HtmlSafeTranslationExt
    def html_safe_translation_key?(key)
      key.to_s.end_with?("_md") || super
    end
  end

  module I18nBackendExt
    def translate(locale, key, options)
      result = super
      # Rails missing key returns as MISSING_TRANSLATION => -(2**60) => -1152921504606846976
      if key.to_s.end_with?("_md") && result.is_a?(String)
        if result.include?("\n")
          Markdown.convert(result)
        else
          Markdown.inline(result)
        end
      else
        result
      end
    end
  end
end

ActiveSupport::HtmlSafeTranslation.prepend Markdown::HtmlSafeTranslationExt
ActiveSupport.on_load(:i18n) do
  I18n.backend.class.prepend Markdown::I18nBackendExt
end

# Generate HTML from Markdown without any block-level elements (p, etc.)
# http://stackoverflow.com/a/30468100/241735
module Kramdown
  module Parser
    class Inline < Kramdown::Parser::Kramdown
      def initialize(source, options)
        super
        @block_parsers = []
      end
    end
  end
end

Is everyone ok at the gemba

The following is the bones of a half-written essay I’ve had kicking around in my drafts for the past 3 years, occasionally updated. I recently read two things that said it all better anyways, but if you read through you get my perspectives as someone in software cooking the goose.

One: Albert Burneko’s “Toward a theory of Kevin Roose”:

My suspicion, my awful awful newfound theory, is that there are people with a sincere and even kind of innocent belief that we are all just picking winners, in everything: that ideology, advocacy, analysis, criticism, affinity, even taste and style and association are essentially predictions. That what a person tries to do, the essential task of a person, is to identify who and what is going to come out on top, and align with it. The rest—what you say, what you do—is just enacting your pick and working in service to it.

…. To these people this kind of thing is not cynicism, both because they believe it’s just what everybody is doing and because they do not regard it as ugly or underhanded or whatever. Making the right pick is simply being smart. And not necessarily in some kind of edgy-cool or subversive way, but smart the very same shit-eating way that the dorkus malorkus who gets onto a friendly first-name basis with the middle-school assistant principal is smart. They just want to be smart.

So these people look at, say, socialists, and they see fools—not because of moral or ethical objections to socialism or whatever, or because of any authentically held objections or analysis at all, but simply because they can see that, at present, socialism is not winning. All the most powerful guys are against it. Can’t those fools see it? They have picked a loser. They should pick the winner instead.

Two: Ed Zitron’s “Make fun of them” (emphasis in the original):

In my opinion, there’s nothing more cynical than watching billions of people get shipped increasingly-shitty and expensive solutions and then get defensive of the people shipping them, and hostile to the people who are complaining that the products they use suck. 

In the day to day

One of the standard questions in my manager/executive interview kit is:

Walk me through what a good day looks like for you if this were your ideal job? And based on past experience, walk me through a bad day? (yes, this is described in the Phoenix Project)

With some prodding, I want to suss out how they think about a mix of group meetings, 1:1s, and heads-down time. And ideally that the candidate can articulate some concrete artifacts of work (canned meetings, documents, etc.).

  • An excerpt of a good answer: Promoting someone up a level is really satisfying. Being in a calibration meeting where I’m presenting the packet my report and I developed together. I’ve designed promotion processes before and building an agenda for that meeting is a lot of fun. Do you have a career ladder here? I spend a lot of time doing gap analyses. I’ll spend at least a few hours every week running through my notes.
  • An excerpt of a bad answer: Promoting someone up a level is really satisfying. It’s important people are recognized for their work.

Good answers usually have jumping off points to talk about working and communication styles: “oh, is that something you’re doing over chat or email or in a shared document? Is that a repeating thing or as needed? How would you pull that together?” Bad answers usually stay at the general level (async, mastery, autonomy, meaning, etc.) and just… stop.

Having done maybe 30 of these interviews over the past decade, I’ve realized there are many people who seem otherwise competent but can’t talk, concretely, to what they do. Physically. Embodied. Even at a computer, what’s behind that digital window.

And I say “seems competent” cause, well, I usually pull these questions out at the end of the interview pipeline, and the candidates are otherwise qualified and their previous interviewers liked them enough to advance them to this stage. And even when the company has gone on to hire them, over my objections sometimes based on this question, they haven’t been the worst. The candidate I interviewed with the most memorably bad answers is now an SVP of Engineering at a major tech company. They’re doing ok.

But I do think there’s something there that’s indicative of the moment. To break it down, there are two awarenesses that I’m checking for:

  • Materiality: an awareness of where they are doing the work, and that’s also sorta doublechecking that they are aware that other people actually exist too. You read enough Ask a Manager and you realize a lot of powerful people struggle with object permanence when someone is outside their sight lines.
  • Operationalization: a set of personal playbooks for making things happen. For example, I’m a big fan of skip 1:1s (when you meet with your report’s reports, or your manager’s manager) and will make a point of intentionally setting those up. I have lots of opinions about what a minimally-viable-career-progression system looks like: career ladders and performance evaluation processes and calibration meeting agendas and 1:1 templates. Or more discipline-specific, like inventories and gapping templates and decision docs. In any job we don’t have to use mine, but I sorta expect an experienced manager to have them in their back pocket and be interested in talking about them.

All of which is to ask: take me to your gemba, ideally, and help me understand how it differs from your worst one too. The Gemba being the location where the work happens. Pedantically, it’s where the value is actually created, like the factory floor, but in this knowledge-heavy work… who can say? Our most valuable assets go home every night, right?

The AI in the Room

All of this comes to mind with the contemporary exhortations of like “AI is mandatory” and “you must use AI in your job” sorts of manifestos, and the reply-guys of like “you either git gud with AI or you fall behind and end up living in a cave and eating bats.”

So I take the previous thought of like “lots of managers and executives have no idea what their own work actually looks like”….

…and my thoughts about my own discipline: how does software get made? Nobody knows. On the individual level, it’s extremely rare to find people doing anything like Extreme Programming and its emphasis on pair programming and rigid collective team practices. In most of my decades of professional experience, software is just expected to happen. Nobody knows.

For example, most teams I’ve worked with have huge differences in how individuals approach a problem: what and how much design or planning they do up front, whether they start with tests or implementation, the order of components they work through, what they consider “done”. Drill down to the actual hands-on-keyboard, eyes-on-screen level, and editors and IDEs and development tooling are all over the place, developer to developer. And there are no practices for sharing or learning from each other, and rarely interest either (“it works for me and I expect it would be painful to change”).

I have to imagine there’s a relation here; more often than not, I’m talking to software managers and executives. Shared practices just aren’t a thing.

So I’ll simply say: it’s weird that AI is the thing to mandate, rather than like a consistent IDE, or testing strategy, or debugger workflow. That this is the thing, when there is so much everything-else that nobody knows.

Accountability kayfabe

I’ll admit it’s easy to take potshots at the weird things tech executives say and do, but I see a pattern here. Just prior to these AI mandates were the layoffs, which had their signature phrase and power pose: “I’m accountable for this decision.”

“Accountability” is a funny word, as it means to “give an account.” Y’know, explain what happened, what was done, when, and by whom. What’s funny is that the word has been sort of walked back from actually giving that explanation, to the idea of the burden of having to give that explanation, to just a vibe of like “I’ve got it. This one’s on me.”

I noticed that a lot. I’m not the only one.

I think the thing that people wanted to know, employees especially, was just like: materially and operationally, what the hell happened here?! And when there’s not an answer, there is a reasonable spectrum between active gaslighting on one side and my recognition that the people in charge could actually have no idea and maybe not even the personal capacity to know. It just ended up that way. Things happened.

Bringing it back around

I dunno. Just continue asking the “can you show me that?” “can we look at it together?” “how do you think that will affect things?” “is there anything you have in mind that I can do to help?” questions.

Recently, June 29, 2025

  • We have a new fridge; it is the same model as the old fridge because only that model would fit in the cabinetry. The installers also discovered that the water valve was broken and couldn’t be shut off; subsequently, the plumber determined that only the handle had snapped. I ordered a completely new water valve to unscrew its handle and attach that handle to the existing valve. In this economy.
  • This week in Rails, I went back and replaced most of the places I was using turbo-broadcast-refresh with targeted turbo-streams. I also spent a bunch of time trying to make an autogrowing textfield that didn’t bounce the page up and down, which the style.height = auto; style.height = scrollHeight strategy does with Bootstrap; this was the result.
  • I’m committed to RubyMine Junie over Cursor for AI-assisted coding. I think Cursor does ever-so-slightly better with generated code and the prompting UI, but RubyMine is so far beyond for everything else. I keep sharing this on Reddit, so here’s my agent guidelines that I symlink into wherever the tool wants it.
  • I’m still reading The Future of Another Timeline. And I started playing Satisfactory.
