Meta

My New Blog

I’m starting a blog (again)! Why do I want to have a blog?

  1. I used to write. I was reasonably good at it, I think. I’m rusty now. I spent the past decade or so focusing on other things, but I always told myself I could get back to writing someday if I wanted to.
  2. I want to document things I know and learn and think. I want a place to point when someone asks me questions like “What kind of CO₂ monitor do you use and do you like it?” or “Why do you think contra dances should have mixers?”. I want this for sharing, but I also want it for Future Harris’s reference. My brain is leaky. Hopefully my blog is not.
  3. I love a technical playground. Professional work can be limiting, especially as my career shifts toward more management and less code. I’ve been a web developer and designer most of my life and I enjoy it. This can be more than just a place for me to write. My blog can be a place to experiment with different technologies and aesthetics, unconstrained by professional concerns.
  4. I can control my own content. I post plenty of trivial thoughts and day-to-day tidbits on social media, but, especially with work I put effort into, I’d like to preserve it in its best form. I don’t want to depend on third parties to make it easy to find and access. I want to know that – if I choose – I can keep my writing online for 20 years or more. I’ll likely follow a POSSE approach here¹.

So, I’ve created this blog. I’ve stuck it in a design that I think is cute and the right amount of nerdy and that will probably change over time. I’ve played with some new-to-me technologies in the process (more on this later, probably). I’ve imported some older content from social media and I’ll continue to plumb my social media profiles and older blogs for things worth sharing. For a while, I expect this blog will grow in two directions – out into the future and backward into the past.

I’ve had blogs in the past, on and off. I’ve maintained them with greater and lesser success at times and I make no promises about how well I’ll do with this one. I will try to keep my FILDI² strong. We’ll see how it goes.

  1. Hat tip to Jeff for pointing out the relevant acronym to me as well as being a blogging inspiration. ↩︎

  2. Fuck It Let’s Do It – as coined by aughts vlogger Ze Frank. ↩︎

Miscellany

Recommendation: Appliance Cable Organizers

Here’s a small purchase that’s been bringing me some satisfaction lately:

I find it really annoying when an appliance comes with a long cord and no way to store it – especially if it’s an appliance I move frequently (like my kitchen mixer), or one I only move occasionally but that’s really awkward to move when I do (like a window AC unit).

I found these stick-on cable organizers that solve this problem for me! These are the particular ones I purchased, but I’m sure there are lots of similar options on and off of Amazon.

[Photos: a white box fan, a red KitchenAid stand mixer, and a gray boxy HEPA air purifier, each with a silicone cable organizer adhered to it and the power cable coiled around it.]

Software

Generative AI Possibilities and Adobe Firefly

I’ve critiqued conversations about new “generative AI models” for the past year as often having limited imagination about what these models can do. People tend to assume that what the models are being used for now is what they will be used for in the future: specifically, “enter a prompt, get an image” for image models (like Midjourney and DALL-E) and “chatbots” for LLMs (like ChatGPT).

I think this has really limited the conversation because, for example, it’s easy to see “enter a prompt, get an image” as a way to replace artists and discuss the technology on those grounds. But I doubt these models will be used primarily in that specific way for long. They’re more likely to become components in the artistic process: making certain things easier, lowering the barrier to entry to visual art, and making it trivial to accomplish visual effects that were previously inaccessible or challenging, much like modern cameras make trivial certain types of photography that were challenging a couple of decades ago.

(I think something similar happens in conversations about ChatGPT. People see chatbots, but they don’t see “cheap classifier,” “image alt text writer,” “document summarizer,” or “style guide adherence checker.” Some people do see “code assistant,” partially because those products already exist; imo that’s one of the most productive places we’ve found to deploy LLMs so far, and it’s already benefiting my work personally.)

Anyway, here’s the first tool I’ve seen based on the same general image generation technology as Stable Diffusion/DALL-E/Midjourney that’s poised to go mainstream. It’s from Adobe, baked into Photoshop, trained exclusively on image sets that Adobe has rights to, and to me it feels like a natural successor to the content-aware fill functionality that visual artists have already used for a decade+. Very curious to see how it will be received.

Archive: ten posts →