<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.frank.computer/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.frank.computer/" rel="alternate" type="text/html" /><updated>2026-03-10T18:51:26+00:00</updated><id>https://www.frank.computer/feed.xml</id><title type="html">Frank Elavsky</title><subtitle>PhD at Carnegie Mellon University investigating accessibility and data interaction. Previously: data interaction R&amp;D at Adobe, Highsoft, Apple, Visa, + others.
</subtitle><entry><title type="html">On genAI: Was prototyping really a bottleneck?</title><link href="https://www.frank.computer/blog/2026/03/prototyping-bottleneck.html" rel="alternate" type="text/html" title="On genAI: Was prototyping really a bottleneck?" /><published>2026-03-09T00:00:00+00:00</published><updated>2026-03-09T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2026/03/prototyping-bottleneck</id><content type="html" xml:base="https://www.frank.computer/blog/2026/03/prototyping-bottleneck.html"><![CDATA[<p>(This post is a part of my new mantra moving to Cal Poly SLO: “move SLO and repair things,” which is in direct tension with the mantra from the tech industry “move fast and break things” as well as a play on the abbreviation of San Luis Obispo, where I will soon work. I want to cultivate a research culture and lifestyle focused on maintenance, careful deliberation, and <em>care</em>, as defined in the feminist sense of the word.)</p>

<hr />

<p><br /></p>

<p>Ever since ChatGPT, folks have frequently remarked something along the lines of, “LLMs are so fast, now we can easily scaffold prototypes! Finally, the bottleneck is gone!” <em>Bottleneck?</em> Was the problem with prototyping the fact that it took too long? Some nerd (Microsoft guy maybe?) said the even-more-ridiculous “finally the bottleneck from typing is gone,” as if the speed of typing is what was holding back new ideas and features and improvement and so on.</p>

<blockquote class="warning">
    <p>⚠ <b>Warning</b>:</p>
    <p>This blog post is a response to a post on LinkedIn. The original may someday be lost in time! For now, the <a href="https://www.linkedin.com/posts/ebertini_i-am-convinced-we-are-in-a-new-era-for-visual-share-7436798122770653184-CZpP?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADDAwBkBOdoW11I9B5DHy57VfR5jIs33Kq0">post is here, by Enrico Bertini</a>.</p>
</blockquote>

<p>Enrico writes,</p>

<blockquote>
  <p>“I am convinced we are in a new era for Visual Analytics (VA) (i.e., visual interactive data interfaces), but I am not sure we have yet realized it.</p>

  <p>In VA, building and refining a prototype has always been a costly bottleneck. This is no longer true. A single skilled person can build a working prototype in a matter of hours. They can also thrash it and rebuild it within hours. Heck! They can even build three alternatives and show all of them in maybe 1-2 days. This is no small change. It’s massive.</p>

  <p>How do we deal with that?</p>

  <p>All of a sudden, an idea in my head can be transformed into a tangible prototype very quickly. This has enormous implications for applied research. If I work with a partner in a specific domain, I can quickly show something rather than talk. If I am good enough, I can even build something they can use pretty quickly.”</p>
</blockquote>

<p>I don’t want to interpret Enrico’s words too harshly or in an ungenerous way (but I could, if this was all I saw!). I think Enrico has remarked in the past on many occasions that genAI is creating new challenges. And I want to unpack some assumptions I see here in this post, as well as assumptions (and dangerous ones!) that I’ve seen many other people express.</p>

<p>One of the main challenges, before I really jump in, is that building something quickly can be good <em>if you know exactly what you are doing</em>. And this is the danger! Students haven’t learned all kinds of things about software and how to think about users, human cognition and perception, and more. And non-technologists are often unopinionated about the detailed design and engineering intricacies that someone who is more-experienced in technology comes to consider.</p>

<p><strong>But people tend to assume that the ideas they have in their head are really good, if they aren’t used to rigorously iterating on ideas.</strong></p>

<p>Sometimes… slowing down is good for <em>you</em>, the builder! And sometimes going fast is awesome, but you must absolutely know what you’re doing. (And at that point, genAI is very rarely much of a speed-boosting tool. GenAI users tend to spend far more time checking output and logic <a href="https://arxiv.org/abs/2507.09089">compared to when they write code themselves</a>.)</p>

<p><em>So, consider this blog post a response to the least-generous interpretation that one might have of the world around us; peering at the worst-case scenario uses of genAI and not just the optimistic ones.</em></p>

<hr />

<p><br /></p>

<figure style="display: block; width: 90%; margin-left: auto; margin-right: auto;">
    <img src="https://www.frank.computer/images/slopzone.png" alt="The prototype slop zone by Frank Elavsky. Diagram. There are two axes. X axis is how technically refined an artifact is, starting low and increasing. Y axis is how intellectually refined an idea is, starting low and increasing. In the upper left (high idea refinement, low artifact refinement), an annotation says, 💡 what users want genAI for (for their brilliant ideas), an arrow points to an area labeled (zone of missing skills). Then, in the bottom right, an annotation says what genAI enables 🫠. Low fidelity prototypes fill the left side near the bottom (low technical refinement and ideation), mid-fidelity in the middle of the diagram, and high fidelity fills the upper right (high ideation, high technical refinement). A tiny slice in the top right corner (maxed on both axes) says no longer a prototype." />
    <figcaption>If you believe that prototypes are merely an idea-first-made-real, rather than prototypes representing a means to refine the ideas they represent, this diagram might help you realize a few things (namely that folks often assume what they lack are skills, when they often lack skills AND well-refined ideas).</figcaption>
</figure>

<h2 id="is-iterative-ideation-itself-a-bottleneck">Is iterative ideation itself a bottleneck?</h2>

<p>In some contexts, slow prototyping is probably a “bottleneck,” sure. But prototyping is central to generating an understanding of the thing you’re making, questioning your assumptions, and trying to communicate the rawest components of your ideas. None of those inherently get better if you do them faster.</p>

<p>In terms of prototype “fidelity” (understood as “<a href="https://www.frank.computer/blog/2024/01/what-is-a-prototype.html">faithfulness</a>”) I reckon we need to now consider: <em>whose</em> ideas is this prototype faithful to? Is this prototype faithful to the original human creator, or are there embedded biases in this prototype’s function and construction because it was made using agentic tools/modern models?</p>

<p>An “unfaithful” prototype now has a new meaning, because we can more easily create prototypes that <em>appear</em> to be high fidelity, but may not actually be <em>faithful</em> to the intricacies of any particular idea at all.</p>

<p>This line below, in particular, I think <em>does</em> have enormous implications for applied research, but I wouldn’t necessarily assume that the implications are all good:</p>

<blockquote>
  <p>All of a sudden, an idea in my head can be transformed into a tangible prototype very quickly. This has enormous implications for applied research.</p>
</blockquote>

<p>I agree that a shift is taking place. But I don’t think we should just move forward as if the act of slowly prototyping itself was a problem to solve. I would argue that the act of prototyping (<em>especially</em> when done slowly) is still valuable as a necessary step we take as creators, attempting to make something new. (As a note, I get disappointed with the view that a “prototype” is the same as “a first attempt to create something” and not “an artifact I use reflexively to discover+congeal my own ideas and communicate with.”)</p>

<h2 id="is-faster-communication-better">Is faster communication… <em>better</em>?</h2>

<p>I’ve written about prototypes before, and what I think much of our literature misses on the subject: that prototypes are actually a tool for <em>communicating</em>. (<a href="https://www.frank.computer/blog/2024/01/what-is-a-prototype.html">This blog post of mine on prototyping</a> was my most-visited post, before <a href="https://www.frank.computer/blog/2025/05/just-a-tool.html">my post on AI + Tools</a> came out. Funnily enough, <em>this</em> post I’m writing right now seems to be marrying the two.)</p>

<p>The fact that we can build functional symbols for communicating <em>faster</em> isn’t necessarily good, if we don’t have a firm grasp of the idea itself in our head in the first place. One of the best parts about prototyping, to me, is the slowest parts: you draw something out with a pencil and paper, you cut out pieces of cardboard, you use some lego bricks to scaffold something, you get up to a whiteboard with a few other people and start squeaking your pens across the blank canvas together… <em>and you talk about that stuff with each other.</em></p>

<p>“Prototyping” is <em>not</em> a linear pipeline that goes from an idea in your head into a material artifact that represents that idea.</p>

<p><strong>Prototyping is a method of symbol-making. And we use those symbols to communicate ideas.</strong></p>

<p>And this is why I am actually worried about prototyping that moves <em>too</em> fast. And in fact fast-prototyping-as-a-problem isn’t new! Researchers who studied “fidelity” in prototyping have long discussed how moving too quickly to higher fidelity prototyping can bias your future ideas, bias your outcomes, and ultimately produce <em>more</em> costs, not less, down the road (see this <a href="https://dl.acm.org/doi/pdf/10.1145/223500.223514">seminal summary of the great older battles on fidelity by Rudd, Stern, and Isensee</a>).</p>

<h2 id="are-genai-prototypes-anti-social">Are genAI prototypes <em>anti-social</em>?</h2>

<p>So back to Enrico’s argument here… I’m not sold that faster = better and <em>certainly</em> not sold that faster = better = cheaper (specifically responding to the sentiment in his post that it has “always been a costly bottleneck”). I think that existing literature would argue quite the opposite. Faster = 1) more assumptions are missed in the core ideas, 2) ideas that appear already-refined become immune to questioning, and 3) faster, more-refined prototypes are ultimately less social.</p>

<p><em>Yes!</em> You heard me: Faster prototypes mean <em>anti-social</em> prototypes. This final point (my own, not necessarily something I’ve read in the literature) is actually the most important one, and one I’ve already brought up: <strong>prototypes are about symbol-making for the purpose of communicating</strong>. If you were the only person left alive on this earth, but you still learned how to speak, you would likely not lose your internal dialogue. You would still speak, even to yourself, to think through things. And prototyping, as an act and process (not a destination), is about that symbol-making. You’re working through whatever idea needs forming and needs new symbols for expressing. Prototypes are a form of self-socializing ideation. And their inherently symbol-rich, generative nature makes them ideal for socializing with others.</p>

<p>Enrico acknowledges the power of socializing in his post:</p>

<blockquote>
  <p>If I work with a partner in a specific domain, I can quickly show something rather than talk. If I am good enough, I can even build something they can use pretty quickly.</p>
</blockquote>

<p>And here is the danger: sharing something with others that is <em>too polished</em> is exactly what the existing literature on prototyping warns about! You’re sharing an over-developed set of assumptions! People mostly give feedback on little cosmetic details once you show them something pretty far along. A good conversation partner (who understands that what you are showing them is potentially throw-away) can overcome this behavioral barrier, but the literature that observes how people interact with prototypes suggests that, unprompted, people give less fundamental, high-level critique and engagement the more polished a prototype is.</p>

<p>Inviting someone to write a language with you is different than asking someone to critique a sentence you wrote. What you want, when socializing a prototype, are conversation partners who want to question your core assumptions about whatever problems you believe exist and are trying to solve. Prototyping that is low fidelity (and slower) invites this language-crafting level of engagement with peers. Higher fidelity (with LLM assistance or otherwise) invites a copyediting level of feedback. We <em>want</em> socialization on the core symbols and ideas we choose to construct! And that might always have a <em>human</em> bottleneck built into it (for as long as we humans are in charge of prototyping, that is).</p>

<p>But just to drive this home: <strong><em>socializing your idea</em> is the real value of a prototype</strong>. Socializing a raw idea is a fundamental epistemic activity that we do. And I honestly wouldn’t like ideation, as a human act, if I only ever did it alone or with a machine that simply confirms my biases and listens to my instructions. Refining, discussing, sharing, mixing, appropriating, and fusing ideas are actions that really only come into the picture once we have a prototype in between a few people who are trying to figure something out together.</p>

<h2 id="socially-constructing-meaning-is-slow-and-good">Socially constructing meaning is slow (and good!)</h2>

<p>But <em>why</em> is socializing so important? Because our intelligence is <em>positional</em> and <em>situated</em>. We have “horizons” of knowledge, and limits, as singular beings. We are not objective. Yet through social inclusion in the act of making and forming ideas, we broaden the angles and horizons of our social constructs. And for this reason, prototypes that are socialized are simply better, more meaningful creations. I’d even argue that a prototype that isn’t socialized isn’t a prototype at all, it is just an early attempt at making something: the focus is on the <em>object</em> being made, not the idea it represents. And it’s that human socialization and collective meaning-making that turn an artifact-in-process into a prototype that is refining something outside of itself: an <em>idea-in-process</em>.</p>

<p>In “<a href="https://hci.stanford.edu/courses/cs247/2012/readings/WhatDoPrototypesPrototype.pdf">what do prototypes prototype?</a>”, Houde and Hill argue that the separation of artifact and idea is key to understanding the value of prototyping: the prototype isn’t the idea itself, it is just a symbol of the idea (hence, why I write about why fidelity = faithfulness and not something like “quality”).</p>

<p>I worry about prototypes that are <em>too</em> refined <em>too</em> soon. Will they alienate our ideas, because our ideas are less inviting for critique/reshaping at fundamental levels? (Isn’t it ironic, that using a <em>language model</em> possibly holds us back from building a truly social new set of symbols for communicating an emerging, shared space of thinking?)</p>

<p>A good prototype can be made quickly, but it <em>needs</em> to be a little bit shitty, too; it’s basically a requirement. And in terms of pure speed, the fastest prototypes I’ve ever built are still far faster than an LLM-prompted mini-app and contain far fewer baked in patterns and assumptions about what I’m trying to accomplish. The humble pen and paper, cut out pieces of paper, and lego brick styles of prototyping are all undefeated ways to build a shared set of symbols about a new idea with someone. Plus, they’re fun to collaborate with, as a design material! Until we can have easy to use, small, modular, generic, programmable soft and hard materials, doing things with smoke and mirrors and little trinkets is still my favorite way to do things.</p>

<p>Anyway, this blog post is a largely-unstructured brain-dump in response to a linkedin post. Thanks to Enrico for posting it! It certainly got my juices flowing.</p>

<p>And it <em>does</em> seem ironic to me to socialize the idea that prototyping=faster=cheaper=good (that all of these things are true and related). That seems like an idea that has been developed <em>too far</em>… it has too much fidelity, not enough prototyping. Perhaps I would recommend going back to the drawing board and re-assessing this?</p>

<hr />

<p><br /></p>

<h2 id="bonus-take-the-psychology-of-faster--better">Bonus take: the psychology of <em>faster = better</em></h2>

<p>Now, the real spicy take of mine (imagine the armchair psychologist within me saying this): I don’t think the “bottleneck” that existed was any more than simply manufactured <em>human impatience</em>. We don’t want fast because fast is actually better or cheaper. We want fast because someone convinced us that fast <em>feels more secure</em>. We have been told that fast feels <em>stronger</em>; feels <em>productive</em>. Fast makes the line go up and lines going up is a good thing.</p>

<p>But <em>fast</em> is affective and subjective more than it is objectively good. We have socially constructed why time matters and by extension we have socially constructed why faster things matter. And so the objective speed of something is actually secondary to our perception and understanding of speed. As anxiety-ridden animals, we have invented the want for things that <em>appear</em> fast, even if (by actual measurement), <a href="https://arxiv.org/abs/2507.09089">things can be just the same speed or even slower</a>.</p>

<p>We are taught not to like that our ideas and symbols are in-process, fragile, and flimsy… <em>and that we have to sit with that reality for a while before making something good and meaningful</em>. We want our ideas and symbols to appear as-congealed as possible as quickly as possible. Binary gender suffers from this problem! “Man” and “woman” are so strong and congealed and total and without nuance. Yippee! We are safe from fluidity now, so long as we ascribe to a gender binary!</p>

<p>And so large-language models give us <em>performative fidelity</em>; a sort of false, constructed faithfulness to ideas we haven’t actually invested enough time into and haven’t socialized. If anything, they construct loyalty to under-formed ideas more than they actually construct ideas, because constructing ideas is slow and messy!</p>

<p>Large-language models then simply become yet another social function of anti-fluidity, seeking to box up a thought as soon as we have it and transform that thought into a commodified, less-flexible unit, which we can claim is an “idea” that has been “prototyped” (without any symbol-making and socialization necessary!). This brain-fart-to-built-UI pipeline is so fast that it might actually begin to convince people that their lightly-sautéed neuron-events are actually really well-thought-out, mature concepts like slapping sheepskin on a lamb.</p>

<p>“Prototyping is faster now” is yet another example of the make-you-feel-less-insecure style of marketing that every modern company dreams of taking advantage of. Marketing <a href="https://www.smithsonianmag.com/history/how-advertisers-convinced-americans-they-smelled-bad-12552404/">sold us the need for deodorant by stoking human insecurity</a>, and now it’s working on the impatience and insecurities of software engineers, too.</p>]]></content><author><name></name></author><category term="prototyping" /><category term="design" /><category term="software" /><category term="llms" /><category term="large-language models" /><category term="ai" /><category term="agentic models" /><summary type="html"><![CDATA[I keep hearing folks claim that the fact we can 'prototype' so quickly now is a good thing (thanks to modern genAI). But what if the slow parts about prototyping are actually what makes it worth doing?]]></summary></entry><entry><title type="html">5 reasons why I didn’t choose an R1</title><link href="https://www.frank.computer/blog/2026/02/say-no-to-R1s.html" rel="alternate" type="text/html" title="5 reasons why I didn’t choose an R1" /><published>2026-02-18T00:00:00+00:00</published><updated>2026-02-18T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2026/02/say-no-to-R1s</id><content type="html" xml:base="https://www.frank.computer/blog/2026/02/say-no-to-R1s.html"><![CDATA[<p>I cast a wide net this job market season. I wasn’t sure where the jobs would be, and I was pretty confident that things would be competitive (since the state of industry means more people who are on the fence about industry might become tempted by faculty positions). And the state of the US means that faculty positions overseas, I reckoned, would be especially competitive.</p>

<p>But I’ve been planning to get into academia since I left industry in 2021, so I wanted to really examine all of the pros and cons of the different styles of academic life I could end up living.</p>

<p><strong>Research staff at an R1:</strong> I had already done research staff work back in 2017 and 2018 at Northwestern, and I wasn’t interested in that lifestyle. It didn’t pay great, but also had no job security (I survived a big culling they did in 2018, but that motivated me to look for employment elsewhere - if the pay is lower than industry but I’m still treated as expendable, like an industry job, then why stick around?).</p>

<p><strong>Research scientist at an R1:</strong> I could have looked for the few “research scientist” roles, at places like CMU, Stanford, and so on, where you just do research 24/7 and have your own grants and agenda, but the problem with those is that you have to have a very high output level before starting, which I don’t have. And pretty much all of the openings that I saw explicitly prioritize AI/ML research over everything else (because, of course, that’s where the money is at 🙄).</p>

<p><strong>TT research faculty at an R1</strong>: Now, the tenure-track (TT) research faculty roles at R1s are tempting to me. You get tenure. That rules. And the bar for research output is lower than a pure research scientist role. And the bonus? You are expected to teach a little bit. (I love teaching.) This all seems great! And I’m sure that in an optimal environment, an R1 could be a good fit for me. However… let’s break down a few reasons why this kind of work isn’t ideal for what I want:</p>

<h2 id="5-reasons-that-tt-research-faculty-roles-at-an-r1-arent-ideal-for-me-personally">5 reasons that TT research faculty roles at an R1 aren’t ideal for me, personally</h2>

<h3 id="1-funding">1. Funding</h3>

<p>The biggest reason that R1s became less of a priority to me was all of the cuts in 2025. I watched the folks who do accessibility research at UW, UC Irvine, U Mich, and many other places lose funding. Stacy Branham, one of the people I look up to the most in our little corner of the world of research, who was the co-PI of the single-most impactful grant in accessibility research in computer science (called AccessComputing), <a href="https://www.linkedin.com/posts/stacybranham_nsfs-proposed-budget-to-congress-for-ay-activity-7334812420428713985-xdAt?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADDAwBkBOdoW11I9B5DHy57VfR5jIs33Kq0">announced last year on Linkedin</a> that the grant did not survive federal cuts.</p>

<p>Research faculty positions at R1s are expected to produce significant amounts of research. If you look at the incredible career of <a href="https://scholar.google.com/citations?user=mhmvCgsAAAAJ&amp;hl=en&amp;oi=ao">Jen Mankoff</a> (my absolute favorite person who does accessibility/assistive tech/disability studies research), as an example, she has authored over 200 papers in the last 2 decades or so. In order to be this productive, you have to advise PhD students, whose entire “job” (aside from taking classes) centers on a joint pursuit of your research agenda and their own.</p>

<p>Research faculty <em>need</em> PhD students. And they’re <em>very expensive</em> because of how higher ed works: admin takes a big cut of all your grant money, and then you also have to pay significant premiums for the cost of your PhD student’s education. Some PhD students cost as much as 140,000 USD per year!</p>

<p>But with fewer opportunities for funding, especially since my entire agenda centers on <em>inclusion</em> of people with disabilities in data science work (which is an area the current federal administration has obliterated), that means that I will be under significant pressure to put out papers but might lack the students to help me accomplish my goals. I need to get tenure! But fewer grants means a worse tenure package and fewer grants also means fewer students, which means a lower output level. Not good! I don’t want to work 60, 70, 80 hours per week just to compensate for the state of things, most of which are beyond my control.</p>

<p>And the worst part about having fewer funding opportunities? I would need to consider if my soul was worth selling out… should I look to defense funding? Should I bring the claws of corporate capitalism deeper into the academy than it already is? Mols Sauter shook me up last year, essentially letting me know that my existing corporate relationships make me out to be someone basically already on the road to being an academia-destroying poser. (Mols was much nicer than this, of course, but the reality is that I’ve already demonstrated I’m willing to partner with powerful companies who have profit-based incentives in order to do my work. This does, to some degree, participate in the “corporate capture” of academia, threatening the spirit of <em>true</em> research activities.) And as an aside, Amy Ko (who I look up to immensely) had called me “very corporate” at SIGCSE’s workshop on accessibility about a month and a half earlier. I basically evaporated when she said this to me. (I needed a little self-awareness awakening, in this regard.) So by the time Mols brought into question my tenuous relationships with industry, I was already reeling hard from impostor syndrome and self-doubt in terms of both how “pure” I am as a researcher and the actual ethical impact that alternative funding has had on academia.</p>

<p>And, as a little extra anxiety here: If I wanted to get a grant in accessibility… I now have to compete against the Stacy Branhams and Jen Mankoffs of the world, who are all also trying to get these big-dollar funds to help support their now-unfunded or under-funded armies of PhD students. I’d actually <em>feel bad</em> getting big grants, because there are real PhD students who need the support <em>right now</em>, as opposed to theoretical, future PhD students I’d hope to be able to support.</p>

<h3 id="2-underpaid-workers">2. Underpaid workers</h3>

<p>And speaking of PhD students… they’re grossly underpaid! I actually don’t recommend that anyone does a PhD unless they’re already unionized. CMU has been horrible and our 2 attempts to unionize have failed. I still believe in the cause, but the US Department of Labor is now in the hands of a very anti-worker set of people, whose agendas will certainly smash any new union efforts. It’s going to be very hard to get a recognized union built in the next 3 years because of Trump and his ilk.</p>

<p>So, I have to ask myself… do I really want to start a pyramid scheme? Especially one where the lower-order members of the pyramid aren’t unionized? If I <em>actually</em> wanted to run a top-down capitalistic scheme, I’d hope that my minions at least had fair pay and bargaining rights.</p>

<p>I actually left industry because of a closely-related issue here: the only clear path <em>forward</em> in my career was “up.” ICs (“individual contributors”) hit obvious ceilings of pay and freedoms, while people who take managerial paths forward end up continuing up and up in perpetuity.</p>

<p>But I don’t really like “managing.” I’ve even written extensively about how LLMs are essentially <a href="https://www.frank.computer/blog/2024/06/llms-and-thoughts.html">the perfect fantasy technology for managerial-aspiring people</a>: you go from creating things to managing them. You cease to be a chef when you order from a menu. You cease to be an artist when you commission a piece. And I enjoy the act of research, building, making, and so on.</p>

<p>I don’t really want a pyramid scheme in my future research lab, but without material equity, any culture or “vibes” I try to set won’t prevail. PhDs need fair pay, fair treatment, and… I also want to get hands-on with the work sometimes. (<a href="https://scholar.google.com/citations?user=vlgs4G4AAAAJ&amp;hl=en&amp;oi=ao">Jeff Heer</a> does this with projects like Vega and Mosaic, and I think his style of output would be entirely what I’d love to do someday, too. A mix of teaching/mentoring/advising… which is inescapably hierarchical due to a differential in expertise and experience, but also a mix of just doing things yourself, too, when you need to scratch the itch.)</p>

<h3 id="3-shallow-evaluation-of-productivity">3. Shallow evaluation of productivity</h3>

<p>This does segue into my next topic: <em>how</em> I work plays a huge role in whether or not I hit tenure. I have been speaking to Niklas Elmqvist over the last couple years, who has repeatedly offered me incredibly insightful wisdom. He has been generous sharing wisdom on general European vs American academic culture, housing, work-life expectations, and a whole lot more.</p>

<p>But he also let me know that European assistant professorships often expect more experience than just a PhD before starting. This is mostly because their PhDs are only 3 years, which isn’t quite enough time to really build out a research profile. So, it is common to expect a postdoc, or (in the absence of a postdoc) a higher level of research output than what a European PhD student typically has, to get your foot in the door at a European “R1” slot as an assistant professor.</p>

<p>Niklas gave me great advice on leaning into my strengths and contributions outside of purely my research profile (such as my prolific industry collaborations, influence in policy spaces, and mature career as a respected name in visualization and data science communities outside of academia). While this advice was fantastic (and, spoiler alert, helped me land somewhere I am very happy with), it also helped me to temper my expectations of places that are not quite open-minded enough to really see these strengths of mine as valuable to the academy. I don’t intend to significantly shift how I collaborate with industry partners, nor do I intend to significantly shift the kinds of impact I cultivate in my work. For this reason, my “pure” research output has been on the lower side.</p>

<p>And postdocs are great, don’t get me wrong. But <a href="https://www.frank.computer/blog/2025/04/preparing-to-leave.html">they weren’t a priority for me</a>. Other folks have pointed out that postdocs help you get experience with different institutions, can help you get more funding opportunities, and also just pump out papers. On that last point, I had 2 separate European faculty tell me that a low h-index (like mine) may simply keep me from being considered at a lot of places. They may simply filter me out early on, just based on lack of numbers. (3 of the 4 European faculty I spoke to about job market stuff mentioned my low h-index… which was a much higher rate of mentioning it than outside of Europe. Australia, Asia, and LatAm folks didn’t mention it at all and in the US it was mostly about the “shape” of my current trajectory, which looks “good,” rather than the present state of my paper citations.)</p>

<p>But my point is this: h-index scores play a huge role in some circles. I haven’t spoken to faculty who take it seriously in a one-to-one conversation, but hearing that it plays a role in filtering out candidates who aren’t deemed “mature” enough means that it <em>is</em> a serious metric, whether or not academics openly admit it to your face.</p>

<p>What is an h-index? Glad you asked. <em>It is an abomination.</em> It is an affront to anyone who wasn’t created in a test tube from birth and trained since adolescence to produce research papers with high citation counts. To put it simply, it is a rough measurement of your paper output and the number of times your work has been cited. Google Scholar defines it as “h-index is the largest number h such that h publications have at least h citations.” So if you have 11 papers, each with 10 citations, your h-index is 10. But once all 11 of those papers reach at least 11 citations each, your h-index becomes 11.</p>
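<p>For the curious, the definition is mechanical enough to compute in a few lines. A minimal sketch (my own illustration; not how Google Scholar actually implements it):</p>

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations
    (Google Scholar's stated definition)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still supports h = rank
            h = rank
        else:
            break
    return h

print(h_index([10] * 11))  # 11 papers, 10 citations each -> 10
print(h_index([11] * 11))  # all 11 reach 11 citations    -> 11
```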

<p>There are a lot of things wrong with h-indexes, from a pure stats perspective. But from <em>my</em> perspective, it is a measurement that incentivizes only the activities that improve the metric. As someone with a long history in data science, I’ve seen the effect: otherwise good people who want to do good in the world end up only really caring about producing research (and specifically producing research <em>papers</em> that maximize their citation counts).</p>

<p>Can you have a great research career without focusing too hard on your h-index? Of course. But this single metric has shaped the entire culture of research into an <em>industry</em>. Measurement is one of the most fundamental forms of social control. And the h-index is about the consolidation of the behavior and activity of scholars into paper-producers. Not great.</p>

<p>For me, h-index is especially not great. My h-index is 5. And I’ve been told that 7 is the minimum for a TT R1 research position, but 9 to 15 is considered strong.</p>

<p>But I would not have done the work that I did, on Chartability or Data Navigator, if my goal was h-index pumping. I would have been a low-effort co-author on at least 5-10 papers led by other people, instead of working with policy organizations and ensuring that my industry peers actually leverage my work. I care about the real-world impact of my research, which <em>ironically</em> is what the h-index score ideally tries to measure! Yet, in the same way that jobs programmatically filter out candidates who don’t say the magic words on their resume, without a high enough h-index, I simply won’t look good enough for a TT research faculty position <em>and</em> will have a much harder time hitting tenure. I simply should not care about shaping how the European Commission, World Health Organization, New York Times, Apple, Adobe, and 150 other open source communities and organizations actually work to make their visualizations more accessible. I simply would just have to hope that I write papers, throw them into the void, and that the resulting “citations” are a good enough proxy for “doing good in the world.”</p>

<p>Now, to be clear: h-index at its best isn’t a metric you should try to <em>game</em> so much as just a way to observe your own progress as a researcher, specifically within the scope of paper output and citations. That <em>is</em> a helpful metric, when it is a tiny part of the overall picture. But it is also primarily only useful for people who manage you but don’t have enough time to listen to you talk about how your work is being written into official guidelines for the Government of France, or something. People want <em>numbers</em> and the h-index is <em>already there</em>, so for convenience it serves that small role.</p>

<p>Now, R1s love to talk about how they value other stuff. But basically, it all boils down to <em>pumping out papers like a factory pumps out dongles</em> and getting utterly loaded with grant money. It’s papers and funding, pretty much as the two major non-negotiables. And is there an expected minimum for either, when going up for tenure at an R1? Nope.</p>

<p>And <em>that</em>, my friends, is the problem. If an R1 told you up front that tenure requires 18 papers with at least 18 citations each (an h-index of 18) by the time you’re up for review, combined with an expected average of $500k in grant awards per year over your first 5 years, then I’d be able to set healthy lifestyle expectations.</p>

<p>And R1s rightly argue: we don’t tell you exactly how to get tenure because every academic is unique! Yes, that sounds perfect. Except that they <em>actually</em> only care about papers and funding, and without a clear floor or ceiling set for expectations, you are incentivized to work towards papers and grants <em>every waking moment of your life</em>. Spending time with family? Taking vacations? Having hobbies? These activities are only useful if their utility “recharges” you (intellectually, socially, and psychologically) to go out and write more papers. Otherwise, you should probably be writing papers.</p>

<p>R1s, by nature of priding themselves on being at the top of the pack, have no clearly defined floor because they don’t want <em>anyone</em> doing “the bare minimum” at any point. Infinite work is part of the culture!</p>

<h3 id="4-youre-good-at-teaching-is-a-perjorative">4. “You’re good at teaching” is a pejorative</h3>

<p>This section will be short, but both people I TA’d for while at CMU (Scott Hudson and John Stamper) remarked that I was “good at teaching.” And at the time, I took that as a compliment. (I do genuinely believe that both of them meant it in a positive way.) But when I discussed my goals and options with a bunch of folks (I asked many people for advice over the last 3 or so years), everyone who wasn’t at an R1 spoke <em>very highly</em> of teaching and the value of education, while nearly everyone at an R1 didn’t mention it at all. And there were a couple cases where folks actually told me, “oh, those professors said you’d make a good teacher? They might have been telling you that you aren’t a good researcher. People will do that.” (Again, I don’t think Scott or John meant that at all, but the culture of teaching-as-pejorative at R1s is really disappointing.)</p>

<p>In <em><a href="https://academictrap.wordpress.com/wp-content/uploads/2015/03/bell-hooks-teaching-to-transgress.pdf">Teaching to Transgress</a>,</em> the immortal bell hooks writes, “The classroom remains the most radical space of possibility in the academy.”</p>

<p>Teaching has been central to promoting my work to practitioners, and also to building relationships and collaborations. During my PhD, <a href="https://www.frank.computer/talks/">I’ve given over 100 talks, workshops, interviews, and guest lectures</a>. And teaching students, I truly believe, is one of the most important things that I can do with my time and the knowledge I’ve accrued.</p>

<h3 id="5-im-not-the-main-character-of-this-story">5. I’m not the main character of this story</h3>

<p>That final note, on the radical value of teaching, leads me to my last point: I’m not the main character of this story (my “story” being the great quest to make data science, data visualization, and information in society more accessible for people with disabilities).</p>

<p>I’ve spoken about this before in several talks (see <a href="https://youtu.be/W9LDW-t09oY?si=GpnY9y4kQzP___h-&amp;t=91">my keynote at DDD Brisbane</a>, for example): At the start of my PhD, we took a class (essentially propaganda for doing a PhD and so on) about how you should frame your great research problems like a big quest and the villain (your “big P problem”) is a terrible dragon that only you, the hero of the story, can defeat using a mythical sword (which is essentially your chosen methodology and approach to research). You are encouraged to talk about your narrative as though you’re the champion who solves all the hard problems.</p>

<p>And this is a sickness at the heart of academia: believing this lie. The hero believes they are the only one who can defeat a dragon? Obscene. And in that class, we were encouraged to talk about ourselves within the “hero’s journey” and how we “overcame” the “graveyard” of other, failed researchers who tried to solve this problem (aka, like the “Related works” section of a paper, if you will).</p>

<p>This narrative accomplishes a few things. First, you come off as an asshole. And second, downstream from that (and the solipsism embedded in the narrative), you alienate yourself from others. Individualism was a useful framing, as situated in history. It helped us build language for autonomy, rights, freedoms, agency, and so on. But individualism is a state of <em>deconstruction</em>. Individualism only exists because, in order to see it, you must <em>actively strip the subject of all meaningful connection to others</em>.</p>

<p>There are many reasons that mental health during grad school is catastrophic (again, mental health being another reason that goes with point 2 above), but alienation and the forging of your narrative as a person who is, as an “individual,” the sole hero, is a huge part of that.</p>

<p>Why is being an “individual” problematic? Because to understand something as “a being that cannot be divided further” (what “individual” literally means), you must strip all other connections and relationships from a person and observe only what remains. For human rights, or concepts like <em>independence</em>, it is actually liberating to recognize that we have the agency to free ourselves from our chains. As a <em>concept</em>, it can be useful! “Okay, so individualism is good?” you might ask.</p>

<p>No. As I said earlier, it is a state of <em>deconstruction</em>. The “individual” is a thought project: useful in philosophizing “what if?” But nobody <em>in reality</em> is actually free from all connections and relationships. We breathe the same air as other organisms, we share this literal earth and its resources, and so much more. Nothing material and real is “an individual.” We construct this ideal state of a person because it helps us understand things like “autonomy” and “freedom” and so on.</p>

<p><em>But again: the state of the “individual” can only be understood as having been theoretically stripped of all connection.</em></p>

<p>So is the individual hero even <em>real</em>? No! The hero is a fantasy within a fantasy; made up. Its only utility is in communicating some highly-congealed, super-concentrated narrative about our own contributions to the world. (And, to some degree, we should be able to understand ourselves as actually contributing to the world around us. It’s healthy to do that!)</p>

<p>But we <em>shouldn’t be fooled:</em> we are incapable of writing a research paper without language (requires people), technology to write (requires computers made by others), and without an audience (requires readers). The community the hero saves is actually fundamentally <em>more important</em> than the hero will ever be. The community orients itself and understands the dragon as a danger, the community is where celebration takes place when victories are won. The community, in most all fiction, is also where the hero finds a place to eat, sleep, and fall in love.</p>

<p>And R1s want you to believe that the pinnacle of selfhood is individualism. This alienation drives you towards an undistracted, selfish pursuit of glory. The R1-mindset benefits from this! R1 culture hinges on “individuals” like this.</p>

<p>This is where things get interesting: <em>Individualism</em>, ironically, is a building-block for other social orders to arrange themselves. Because it is a state of deconstruction, hypothetically speaking, it invites us to imagine new social orders (or… spoiler alert, re-invigorate existing ones).</p>

<p>And this is why individualism is so dangerous: if you value yourself, as a disconnected entity from others, as the highest form of being, then you will gravitate towards ideologies that serve yourself over others. Individualism, because it is a state of deconstruction, demands that something is constructed after the strings and tethers that connect us to each other have been cut. And this is why academia has made itself into a pyramid scheme! The alienated worker becomes easier to exploit. Divide and conquer made manifest in the workplace! And whether R1 faculty are conscious of this fact or not, their productivity hinges on this meta-culture remaining in power. You need a “PI” who gets money and benefits from all of the research projects and work of everyone under them that they fund. They need to have that last-author position so their h-index can do big numbers.</p>

<p>But anyone who is reasonable, who breaks from this myth, learns quickly how important community, connection, and interdependence are. Because “individualism” is a state of deconstruction… we can construct <em>better worlds</em> than hierarchies and oppressive systems. And some R1 researchers do this! Really, I know of quite a few. Again, Stacy Branham and Jen Mankoff both seem to have been successful in an R1 role despite working for equity with their workers as a priority. But the <em>industry</em> that is R1 research doesn’t incentivize this. At large, you have to work very hard against the grain not to perpetuate these oppressive systems and still, at the end of the day, be “productive” as a researcher.</p>

<p>Ultimately, my response to the “hero” narrative has always been this: my whole career in tech, and even before (as a barista, organizer, service worker, carpenter, painter, and paper boy) was in support of others and their goals and lives. I’ve always focused on empowering what other people can do. I’m the blacksmith in the village who makes a sword that might be capable of slaying dragons… <em>in the right hands</em>. But do I slay the dragon, myself? Not a chance. And in fact, many other dragons exist. So part of my work as a toolmaker has been to think more in <em>blueprints</em> that other toolmakers can follow. This has been the focus and style of my work on <a href="https://dig.cmu.edu/data-navigator/">Data Navigator</a>, for example.</p>

<p>For this reason, being a toolmaker, teaching-centric environments also make more sense as well. I highly doubt I’ll solve the big P problems that I set out to tackle. But through teaching and mentoring, <em>perhaps someone else will</em>.</p>

<h2 id="so-why-take-the-path-of-most-resistance">So… why take the path of <em>most</em> resistance?</h2>

<p>Anyone at an R1 who looks at my profile next to a star like <a href="https://scholar.google.com/citations?user=jZa4SPIAAAAJ&amp;hl=en&amp;oi=ao">Franklin Li</a> (my fellow lab mate and someone <em>genuinely pulling it off</em> with a higher h-index than many folks have before they even go up for tenure review), it would be hard to consider me a top candidate for an open position.</p>

<p>As I was assembling my job applications last Fall, I had to prepare myself for the reality that R1s would likely not even select a candidate like me (I had early interviews at 2, but didn’t move forward). I also had a fast, early rejection from Uni Vienna. An insider let me know I was soundly beaten by people who had published many, many papers <em>and</em> already raised millions in grant money. I simply do not have enough traditional research output under my belt to even get on the radar of places with high research output. And these rejections (or not even enough recognition to reject… more like <em>neglection</em> for most of them) became a signal that this is likely not the sort of environment I’d want to get into anyway!</p>

<p>If I want to create a world where I can do hands-on research, minimal “management,” have opportunities to demonstrate my impact in more meaningful ways than simply papers/citations/grant money, promote and cultivate an environment where people who work for/with me are treated more fairly (and this is structurally incentivized), and ultimately produce students and mentees who are equipped to go out in the world and make it a better place… then an R1 is not an easy path.</p>

<p>This is why I’m thankful for a tiny little slice of universities and colleges called “PUIs” or “primarily undergraduate institutions” that still expect faculty to do research and have a research agenda. The research-focused PUIs, which you can learn more about from this fantastic tool/blog by <a href="https://cs-pui.github.io/">Evan Peck</a>, have been my priority this season.</p>

<p>One person who gave me immensely helpful perspective, nearly two years ago, was <a href="https://scholar.google.com/citations?user=ziP-50wAAAAJ&amp;hl=en">Ken Holstein</a>. Ken is among the best. He is faculty here at CMU and is massively productive as a researcher. He is truly a top intellectual in his field. But I looked at how much he worked, writing grants, constantly managing things, and so on, and wondered if that was really the sort of life I wanted to emulate. Picking his brain helped me peek into what life has been like for him in his first few years as an assistant professor.</p>

<p>And I was already curious about PUIs, but Ken let me know that a colleague of his recently started as an assistant professor at a little liberal arts college (doing research and teaching) and was “now living their best life.” Their work-life balance seemed to be far better to me than pursuing glory. (And again, no offense intended for folks that do! Really, if I thought I could pull off a career like Ken’s or Jen’s or Stacy’s, then I’d really be willing to give it a shot.)</p>

<p>But later I got advice from a whole array of folks, <a href="https://evanpeck.github.io/">Evan Peck</a>, <a href="https://crystaljjlee.com/">Crystal Lee</a>, <a href="https://jonathanzong.com/">Jonathan Zong</a>, <a href="https://gotdairyya.github.io/">Derya Akbaba</a>, <a href="https://ischool.umd.edu/directory/joel-chan/">Joel Chan</a>, <a href="https://www.laura-garrison.com/">Laura Garrison</a>, <a href="https://www.fatimakoli.com/">Fatima Koli</a>, <a href="https://miriah.github.io/">Miriah Meyer</a>, <a href="https://katta.mere.st/">Katta Spiel</a>, <a href="https://www.namwkim.org/">Nam Wook Kim</a>, <a href="https://oddletters.com/">Mols Sauter</a>, <a href="https://scholar.google.com/citations?user=3cGVmqsAAAAJ&amp;hl=en">Torsten Möller</a>, <a href="https://arvindsatya.com/">Arvind Satyanarayan</a>, <a href="https://cs.uchicago.edu/people/alex-kale/">Alex Kale</a>, <a href="https://venkateshpotluri.me/">Venkatesh Potluri</a>, <a href="https://laurasouth.com/">Laura South</a>, <a href="https://www.albertocairo.com/">Alberto Cairo</a>, and many more… and I kept getting the feeling that as long as I was willing to accept that my research output would be lower (basically because of not having PhD students), then I would most certainly be able to live a life I would be much happier with. A research-focused PUI instead of an R1 became aspirational.</p>

<p>(As a takeaway, if you’re considering an academic life and wrapping up a PhD, speak to folks who are research staff, teaching faculty, and research faculty at a variety of institution types, like R1s and PUIs, in a variety of contexts, like domestic and abroad. Because of this, I had a much better sense of what I figured would work best for me and the life I want to live. Again… individualism is a myth! I wouldn’t have made my own decisions if I didn’t have a whole community of people rooting for me and helping me to succeed.)</p>

<h2 id="my-big-announcement-ive-accepted-my-dream-job">My big announcement: I’ve accepted my dream job</h2>

<figure style="display: block; width: 90%; margin-left: auto; margin-right: auto;">
    <img src="https://www.frank.computer/images/cal-poly-selfie.jpg" alt="Me, smiling and taking a selfie on a sunny day in front of the large-lettered CAL POLY sign." />
    <figcaption>Cal Poly! I'll be an "assistant professor of data science."</figcaption>
</figure>

<p>So I want to now officially announce that I’ve accepted a job at <a href="https://www.calpoly.edu/">Cal Poly</a> (California Polytechnic State University), in San Luis Obispo. It’s a PUI that expects research. It ticks every box I could want. The faculty there have had outstanding research careers, yet the top priority for my tenure evaluation will be my teaching. People clearly explained that I am evaluated based on having 2 “externally recognized” contributions per year: one might be a peer reviewed paper, another a grant, but I can also get creative (but <em>they let me know the floor!</em>). The master’s students and undergraduates I would be working with are top notch, the pace of life is far more balanced than an R1, and (through my interview process) it was abundantly clear that both my past industry career <em>and</em> my atypical avenues of impact are highly valued.</p>

<p>That last part sold me on Cal Poly. <a href="https://ischool.umd.edu/directory/stephanie-valencia/">Stephanie Valencia Valencia</a>, over at UMD, told me to “go where you are celebrated” and that “you will know once you’re there if you are or not.” Faculty and the students at Cal Poly saw my industry experience and style of collaboration as a huge benefit to the university. And folks with impressive research agendas spoke very highly of the level of maturity and breadth of impact from my research. It was clear to me that, unlike the R1s I had interviewed at, my past and present styles of work were considered an actual benefit. (Again: I wasn’t created in a test tube and cultivated from adolescence to produce top-cited research papers… I have a broad and varied background. It felt good to be seen for that and valued.)</p>

<p>I am a bit melancholy, to some degree, that I am not a more typical sort of researcher with a typical profile of research, style of work, set of politics with workers, and arrangement of interests. I would have loved to imagine myself as the hero who slays the dragon at an R1 and gets all the glory. But that isn’t me, and I’d be living a dishonest life if I pretended that was even what I really wanted.</p>

<p>So here’s to a new adventure and embarking on this next arc of my story.</p>

<figure style="display: block; width: 90%; margin-left: auto; margin-right: auto;">
    <img src="https://www.frank.computer/images/cal-poly-weather.jpg" alt="Me, smiling and standing on a beach on a sunny day. I am squinting in the sunlight as the still blue water shines behind me." />
    <figcaption>And I am quite thankful that the next arc of my story is back on the west coast. The smell of the Pacific? The seafood? The mix of cuisines and culture? Oh... and temperate weather year-round? Immaculate.</figcaption>
</figure>

<p>Oh and special thanks to Dominik and Patrick, my advisors. They’ve both been nothing but supportive of me and my goals. I’m thankful for them (and Ken and Jen, both also on my committee) for being so generous with time and advice.</p>]]></content><author><name></name></author><category term="personal" /><category term="academic life" /><category term="academia" /><summary type="html"><![CDATA[Announcing: I've accepted a tenure-track research and teaching faculty position! But here's why R1s weren't a priority for me this job cycle.]]></summary></entry><entry><title type="html">Life advice to a graduating student</title><link href="https://www.frank.computer/blog/2026/02/life-advice.html" rel="alternate" type="text/html" title="Life advice to a graduating student" /><published>2026-02-05T00:00:00+00:00</published><updated>2026-02-05T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2026/02/life-advice</id><content type="html" xml:base="https://www.frank.computer/blog/2026/02/life-advice.html"><![CDATA[<p>Obviously, my advice here is intended towards students who are graduating even though it is written towards myself. Some stuff I say here, like referencing my upbringing or my theology/philosophy degree, are not broadly applicable to other people.</p>

<p>But presenting this as an honest letter from myself now (about to finish a PhD) to myself 10 years ago (about to finish my undergrad) is important because many of the anxieties and the fears I had then are simply not present now. I’m also on the job market, but I’m in a completely different space (mentally, physically, socially, and materially).</p>

<h2 id="background">Background</h2>

<p>This blog post started from a few places. Every now and then a student of mine asks me for life advice (and I’ve never really formalized it). But often, the way I structure the advice is the same: internal, external, and specific advice related to our present-day philosophies and cultures of the “work-self” and “work-life” in the US and tech industry.</p>

<p>So, in order for the advice below to really make sense, I first just want to add context about myself and my past:
I was diagnosed with ADHD (back in the 90s), am also neurodivergent outside of the scope of ADHD (whatever we call that isn’t too relevant to me), I have lived with a few disabilities (one of which required multiple surgeries before I turned 19, which nearly killed me), and my family life was a mess.</p>

<p>But my mother at the time was a drug addict on social security disability and a public school teacher’s pension (which she took early). At 18, she had kicked me out, so I was on my own. My father is homeless and completely out of the picture.</p>

<p>At 19, I was recovering from a major surgery. I had no job prospects and no chance at college, but I was still on my mom’s health insurance during my surgeries (because I was still technically 18). My final surgery was 5 days before I turned 19. I barely escaped in time. (Later, Obamacare extended my insurance until 26, which definitely saved my life.)</p>

<p>But I did odd jobs and side work for a few years. I became best friends with my now-wife at a pizza shop where she was a supervisor. And I eventually got fired from that job (the only job I’ve been fired from, mind you).</p>

<p>The downward spiral that sent me on, in a roundabout way, led me to college. I volunteered with youth programs, one of which was run by my upstairs neighbor (in the duplex I was in). When I was chatting with her about my woes (like, “I can’t even keep a pizza job, what am I good for?”), she suggested that I speak to the academic dean at a small liberal arts college down in Everett. They value community work, and focusing on that part of myself in an application, in addition to my inclination towards philosophy, might help me get a scholarship.</p>

<p>I did, and it paid most of my way through my 5 years.</p>

<p>But graduating was hard. I had no safety net. I also had just picked up a second degree (in “computer information systems”) to try to get a tech job, since I was pretty confident at the time that a theology and philosophy degree wouldn’t cut it.</p>

<p>So, if I could travel back in time, this is what I’d say to a student like me, full of anxiety and fear and an unhealthy amount of self-loathing:</p>

<h2 id="my-advice-to-a-soon-to-be-grad">My advice to a soon-to-be grad</h2>

<p>Over the years, I did a lot of stuff that worked out in the end. A bunch of stuff didn’t. But the first thing that I’m grateful for was getting my “internal” self sorted out, despite the fact this is a pretty self-centered (literally) set of things to think about.</p>

<h3 id="internal-stuff-akin-to-the-volition-skill-from-disco-elysium">Internal stuff (akin to the “volition” skill from Disco Elysium)</h3>

<p>The important parts of having a happy life (from an internal lens) have been:</p>
<ol>
  <li>knowing yourself and continuing to discover what that means</li>
  <li>knowing what you can and cannot change in your environment</li>
  <li>having a willingness to navigate the tension between pushing for something and letting something go</li>
  <li>and getting good sleep at night</li>
</ol>

<p>On knowing yourself, I often say that within me are two wolves: one that loves and one that hates. (These are both different shapes of my neuro-divergence.) Basically, this meme from <a href="https://www.facebook.com/photo/?fbid=122200662602549725&amp;set=gm.1356180386415436&amp;idorvanity=640421687991313">Qasharah Reid on facebook</a>:</p>

<figure style="display: block; width: 60%; margin-left: auto; margin-right: auto;">
    <img src="https://www.frank.computer/images/two_wolves.jpg" alt="Inside you are two wolves. A black wolf and a white wolf stare at each other with the moon behind them. The black wolf is labeled ADHD, the other Autism." />
    <figcaption>My love of data visualization (left) and my hatred for inaccessibility (right).</figcaption>
</figure>

<p>Knowing that I get bored and depressed means that I need to seek stimulation. And two pure forms of motivation within me are my appreciation for the work of people who visualize data and my frustrations with the lack of access that people with disabilities have. Love and spite have carried me through so much. <em>I’ve got that dog in me,</em> so to speak.</p>

<p>On sleep: I’ve found that creating coping mechanisms and developing strategies that get me to go to bed and wake up at the exact same times every day affects my happiness, executive function, and general brain health more than any medication. It’s pretty much the number one thing to focus on, especially if you’re doing something new and hard (like looking for or getting a job).</p>

<p>And in the workplace (and in life, really), ADHD’ers often push far too hard or dig too deep into something at the wrong times, while abandoning or letting things go that should have been finished/wrapped up/polished. And knowing this about yourself can help you decide how to move and navigate complex landscapes like tech work. Stopping yourself and saving your energy are so key.</p>

<h3 id="externalsocial-advice">External/social advice</h3>

<p>In terms of other things (outside of an internal+individualistic lens), it’s really important to also:</p>
<ol>
  <li>DeLashmutt gave you this advice and it is 100% true, but “learn when to say no, because it allows you to say yes to other things” (and how to say no well). Your rejection dysphoria, combined with the fact you are easily distracted by new things, makes it very hard to say no to things. But people won’t hate you for having other priorities sometimes. (Do remember to say yes occasionally, though! Don’t always do everything for yourself, of course!)</li>
  <li>Find happiness with the people in your life, and don’t put yourself + your work too high in your relationships (have hobbies, friends outside of work, etc.)</li>
  <li>Do things that are socially good and gratifying (advocacy, community organizing, or volunteer work) because this will remind you that you have a place in this world that isn’t centered on a job</li>
  <li>Accept the kindness of others (this is especially important because you’re the sort of ADHD’er who is heavily self-deprecating, depressed, and so on). Allowing yourself to be loved is pretty much central to a life well-lived</li>
</ol>

<h3 id="resisting-work-culture">Resisting work culture</h3>

<p>Job-specific stuff is hard though. People kept telling me during my undergrad that, “you will be just fine, you’re going to do great” when I’d remark that I was worried about my future and how I’d find work. And I genuinely felt like I was going to crash out almost every day, but especially when someone would say that. It felt so dismissive. Do they not know that my life is barely hanging on? Do they not know that I have no safety net? My next decisions might determine whether I live or die? Whether I am miserable? Whether I ruin the lives of my favorite people, who I love dearly? There is so much at stake.</p>

<p>And finding a job <em>was</em> hard, but navigating my feelings of confusion and self-discovery during and after getting a job were even harder. I didn’t want to pick the wrong line of work! I wanted to pick the perfect job, something I was happy with. I wanted to be proud of myself for once. I also needed good health insurance (I have other disabilities, so this was really important for me). I couldn’t just do part time stuff, organizing, or live like a Bohemian creator (despite pretty much only ever wanting to write fiction).</p>

<p>Practically speaking, the shortform things I learned from experience were:</p>
<ol>
  <li>Casting a wide net (applying to a lot of jobs) was good. Being open to new things was good.</li>
  <li>Working hard was good, but only because I learned new skills that opened up new doors (not because I was ever rewarded for hard work)</li>
  <li>Traveling cross-country (and a willingness to move) really worked out well, but it eventually burned you out. I’d pick a spot with good job prospects (and be willing to move to wherever that is), rather than bounce all around, if I could do it all again.</li>
  <li>In-person work is actually awesome because of people. You’ll get less work done than at home, but you can be more social, which has been good for you in life. You’ve met some of your best friends because of in-office work, despite being your happiest and most productive when you work from home. So don’t be afraid of in-office or hybrid work at first.</li>
</ol>

<p>The deeper advice that I wish someone had given me instead of “you’ll be fine” was this:</p>
<ol>
  <li>First, <strong>stick to your morals</strong>. Out of college, you narrowly avoided (because of luck!!) a job that probably would have pressured you to do really terrible things. Instead, you chose work where you can look back and not have an overwhelming sense of crushing regret. You said no to projects at some of your jobs that also would have led to regret. You can stand up and say you are uncomfortable with something. And you can push back on things, too. Oftentimes, it seemed like you were the only one willing to say anything. But every time you did, someone thanked you (in the moment or later) for having a backbone. Don’t let that go because you’re afraid of getting fired.</li>
  <li>Second, continuing that: <strong>don’t overvalue any job, ever</strong>. It’s good you pushed really hard to learn skills. Definitely put yourself in situations where you need to swim up to the surface. You thrived in those environments. But also, I’m happy because I either left places that sucked or worked to make sucky places good. So treat every single job, no matter how important it seems, as no more important than the work of a waste facilities engineer or garbage collector. Software engineers might get paid more than those jobs, but that doesn’t mean that software engineers are “better” people. Those jobs are often difficult and thankless but central to a functioning society. Nobody dreams of having those jobs and nobody who has them treats their job like their whole personality. Whatever job you get, you should have the same attitude about it (it’s just a job).</li>
  <li>Building on that: <strong>Separate yourself from the personality of your job</strong>. You are a whole person and your work is more of a duty to yourself and society than it is some kind of mystical calling or reflection of your value. Inner value comes from you and your ability to love yourself. Outer value is often superfluous, but not meaningless, either. So seek connection with folks who know (or want to know) the whole collection of who you are, and not just people who want to judge you based on your work.</li>
  <li>Work less towards jobs focused on titles, positions, and identities. Simply <strong>work towards agency and material outcomes that make you happy in life</strong>. For me professionally, this is why I did a PhD: it allowed me more agency to act on my agenda (which I didn’t figure out and lock in on for several years, so it was really good that I waited to start one). Material outcomes and agency in my life right now look like: having enough space for a gaming room with my partner, being able to take care of our dog and cat and give them a good life, having a queen-sized bed, having an espresso machine, being able to afford medical bills, working 40 hours or less every week (and mostly from home), and so on.</li>
  <li>On identity: <strong>Don’t try to “become” someone. Just try to do things.</strong> The depression and internalized ableism that I’ve had to overcome over the years made me recognize that wanting to “be” something was shallow. I kept wanting a title or some external measure that validated me. What you can do and what you have are far more important than who you think you “are.” And I’d even argue that seeking external validation, for its own sake, is destructive. It kept me, at many points in my life, from doing things. Who you are doesn’t matter, really. If you can do what is good and what you believe in and you have a good, healthy, and fun environment in your life, then you’re on the right track.</li>
</ol>

<h2 id="being-and-becoming">Being and becoming</h2>

<p>This final point has been critical for me. This is <em>ontology</em> in philosophy. And it was queer theory that helped me understand this. And queer theory is still a really useful way to reflect on my life (in addition to other things, too). Don’t discredit your theology and philosophy degree. In many, many ways, that has been more important for your happiness than the computery one (even though you <em>definitely</em> needed the computery degree to land your first job and probably also to get into your PhD program).</p>

<p>But on identity: if you take the “ship of Theseus” problem (which asks, in brief: at what point, if ever, does the ship cease to be “the ship of Theseus” as its wooden boards are replaced), the real problem lies outside the question itself. “The ship of Theseus” has two ontologically important dimensions:</p>

<p>First (and this is rarely discussed) is that it is “the ship” of Theseus. The question takes for granted what it can do. It is assumed that it sails across water. It takes Theseus from one point to the next. And the unquestioned parts of our identity are often the most important ones, in this same way: be thankful for your thinking brain, your fingers, your eyes, your ability to hear and to laugh, and your motivation to do things and then make them come true. Your agency, the culmination of action you are capable of, says more about who you really are than a name could.</p>

<p>Second is the part most people care about: the nominative, or unique, proper-noun part of the question. Yet the first questions to ask about “of Theseus” sit outside the structure of the exercise: why does it matter to even have a named ship? Does Theseus own it? If it didn’t have his name, would any outcomes for Theseus (or the world) change at all? In short, does the name afford new meaning, action, or material outcomes? Rarely is this the case. What matters are the social and cultural forces that enable Theseus and the ship, not the relatively small question of what the ship is called. In our own lives, our identities are the same.</p>

<p>In queer theory, Judith Butler in particular helped me understand that “being” is far less useful of an ontological focus than “becoming” is. We are changing, all the time. And we are always <em>in transition</em>. We are <em>performing</em> being, which, in aggregate, is <em>becoming</em> something of a self, of who we <em>will be</em>. Knowing who we <em>are</em> in the present is only useful if it helps us direct the trajectory of where we are going, and who we are <em>becoming</em>.</p>

<p>And in that sense, trying to gain a title or job position is almost entirely meaningless once you finally gain it. If you are climbing a mountain to reach the peak, then you have superficially invented an end to yourself. A peak is only useful as a place to rest or reflect or change course (more like a <em>landmark</em> than a <em>destination</em>).</p>

<p>Our being is <em>socially constructed</em> and we participate in that construction.</p>

<p>And so in the ship of Theseus question, the physical construction of the ship is irrelevant. What matters more is what the ship can do (that makes it a “ship”) and then the social assignment of the proper name, who gave it, and why. It will always remain the “ship of Theseus” so long as there is a social consensus, regardless of the wood that presently holds it together.</p>

<p>Anyway, you will be just fine, you’re going to do great.</p>]]></content><author><name></name></author><category term="personal" /><summary type="html"><![CDATA[Advice I would have given to my past self, who was graduating college.]]></summary></entry><entry><title type="html">Accessibility in visualization, full course out now!</title><link href="https://www.frank.computer/blog/2026/01/ova.html" rel="alternate" type="text/html" title="Accessibility in visualization, full course out now!" /><published>2026-01-30T00:00:00+00:00</published><updated>2026-01-30T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2026/01/ova</id><content type="html" xml:base="https://www.frank.computer/blog/2026/01/ova.html"><![CDATA[<p>Check out my introduction video here:</p>

<div style="position:relative; overflow: hidden; width: 100%; padding-top: 56.25%;">
    <!-- <iframe title="vimeo-player" src="https://player.vimeo.com/video/1156863139?h=9b7ac82d97" width="640" height="360" frameborder="0" referrerpolicy="strict-origin-when-cross-origin" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share"   allowfullscreen></iframe> -->
  <iframe style="position: absolute; top: 0; left: 0; bottom: 0; right: 0; width: 100%; height: 100%;" src="https://player.vimeo.com/video/1156863139?h=9b7ac82d97" title="vimeo video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>
<p><br /></p>

<p>And please, go and view the whole course online over at the <a href="https://openvisualizationacademy.org/courses/accessibility-in-data-visualization/introduction/introduction/">Open Visualization Academy</a>.</p>

<p>As part of the release, I also helped to audit and give feedback on the accessibility of the Open Visualization Academy, which you can find more details about in their <a href="https://openvisualizationacademy.org/accessibility/">accessibility statement</a>.</p>]]></content><author><name></name></author><category term="personal" /><summary type="html"><![CDATA[In collaboration with Alberto Cairo's Open Visualization Academy, I have released a full course on accessibility in visualization.]]></summary></entry><entry><title type="html">Recap: 2025</title><link href="https://www.frank.computer/blog/2026/01/recap-2025.html" rel="alternate" type="text/html" title="Recap: 2025" /><published>2026-01-19T00:00:00+00:00</published><updated>2026-01-19T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2026/01/recap-2025</id><content type="html" xml:base="https://www.frank.computer/blog/2026/01/recap-2025.html"><![CDATA[<p>I want to start off this little recap by stressing that I <em>know</em> I am privileged. I have no doubt. I have had a wonderful year, full of friends and adventures. The world felt like it had begun to disintegrate in 2025, and yet I found a universe of hope in it, too.</p>

<p>I had a packed year. In some ways, I look back and wonder how I got anything done at all! I got to visit many amazing cities all over: Anaheim, DC, Miami, New York, (a secret location), Vienna, Melbourne, Brisbane, Sydney, and Auckland. 10 big city trips in one year? I’ll probably never top this. And honestly? It was a bit more traveling than I would have wanted. But it was lovely. And Shelby got to tag along for all of these trips, at least in part, except for Miami (which we had both already done together years ago).</p>

<p>But 2025 was good. I think any one of these sections below would have been a highlight of a year, so 2025 was really just dozens of years in one.</p>

<h2 id="the-early-months-pure-horror-and-sigcse-crushing-my-soul">The early months: Pure horror (and SIGCSE crushing my soul)</h2>
<p><em>Jan - Apr</em></p>

<p>I won’t write too much here, but the first two months of 2025 were painful. Trump taking office, the cuts to research, ramping up ICE raids, the overall social chaos that ensued… it was not good. It still isn’t good (and in many ways <em>worse</em>), but it came as a bit of a shock at the time.</p>

<p>And <a href="https://sigcse2025.sigcse.org/details/sigcse-ts-2025-affiliated-events/4/Accessibility-and-Disability-in-CS-Education">SIGCSE</a> happened in Pittsburgh (that’s the Special Interest Group for Computer Science Education). There was an all-day workshop on accessibility and I got my admission paid for by CMU because I had a short talk accepted. I was starry-eyed to be able to attend. Many people I have looked up to for years were there. These were the people I had hoped to come into community with (transitioning from industry to academia was hard, but transitioning from visualization to accessibility has been even harder).</p>

<p>My talk was okay. I was nervous talking about how I am a practitioner who researches with practitioners, and how education matters for my work. I am not sure it landed with that crew (again: accessibility folks are lovely, but getting “in” with the academic community is quite hard when you’re not totally aligned with them already). I could tell it didn’t land because not a single person was remotely interested in talking about practitioners learning. My takeaway is that “CS education” is very clearly, culturally speaking, about <em>formal</em> education and not all of the other settings where education takes place (for better or worse). I also won’t unpack my insecurities here, but I had a conversation that filled me with so much impostor syndrome that I was physically depressed for a whole month after.</p>

<p>But the <em>real</em> theme of the day emerged as time went on. The big concerns for everyone were about federal cuts to research and the <a href="https://www.ada.gov/topics/title-ii/">impending Title II changes</a> for public universities. (So, in some way it really wasn’t a big deal whether or not my talk was informative/helpful/etc. That doesn’t mean my impostor syndrome didn’t flare up, though! I felt very “corporate” and like some kind of non-academic in that space.)</p>

<h2 id="adobe--quansight-collaborations">Adobe + Quansight collaborations</h2>
<p><em>Jan - Mar</em></p>

<p><em>Good</em> news about the first two months? My collabs with Adobe and Quansight were both going really well. At Adobe, we were working on some neat stuff for their open source library. At Quansight, we had just openly published our <a href="https://bokeh-a11y-audit.readthedocs.io/">200+ page evaluation of Bokeh’s accessibility</a>. That report is a big deal. Nobody has openly shared something like that before, which is pretty special.</p>

<h2 id="travel-anaheim-for-talks-at-csun-finally-good-donuts-and-some-much-needed-sun">Travel: Anaheim for talks at CSUN, finally good donuts, and some much-needed sun</h2>
<p><em>Mar 10 - 16</em></p>

<p><a href="https://www.csun.edu/cod/conference">CSUN</a> happened. (CSUN is short for California State University, Northridge’s Assistive Technology Conference.) I gave a talk with the Highsoft folks (with Ted Gies from Elsevier) on our work making visualizations more personalizable. This was the pre-paper-talk talk on our work I dubbed “softerware.” I think it went well (at the very least we all had fun). I also gave a short talk about my work in a session on making diagrams more accessible.</p>

<figure>
    <img src="https://www.frank.computer/images/stevie_wonder_2025.jpeg" alt="Stevie Wonder, holding a microphone, surrounded by fans at CSUN." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Stevie Wonder makes an appearance almost every year! I didn't take a photo personally, so credit for this shot goes to <a href="https://www.linkedin.com/pulse/csun-assistive-technology-conference-2025-paul-j-adam-883tc/">Paul Adam</a> over on LinkedIn.</figcaption>
</figure>

<p>And Shelby and I <em>finally</em> had good donuts, after years of not being able to enjoy some together. Trust me when I say that Dunkin has <em>destroyed</em> good donut culture everywhere except the west coast. God bless Los Angeles for holding strong against the sloppification of donut culture. (For the record, we got a wide variety from the Donuttery in Huntington Beach. 10/10 donuts.)</p>

<figure>
    <img src="https://www.frank.computer/images/donuts.png" alt="A box of donuts." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Real donuts, not that hogwash garbage they sell at Dunkin or whatever crap seems to be popular at those places that sell over-topped donuts for 6+ USD each. These were *classic* and *utterly flawless*. I hope to go back someday.</figcaption>
</figure>

<h2 id="thesis-proposal-i-passed">Thesis proposal (I passed!)</h2>

<p><em>Apr 18</em></p>

<p>I passed my thesis proposal! Woo! (5 days before my birthday, a present to myself hehe.)</p>

<figure>
    <img src="https://www.frank.computer/images/thesis_proposal.jpg" alt="I am standing in the front of the room, leaning on the podium." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>I had fun, despite having pretty cerebral engagement by my committee (Huge thanks to Dominik Moritz, Patrick Carrington, Ken Holstein, Jen Mankoff, and Tamara Munzner).</figcaption>
</figure>

<p>One of the best parts was “camping out” and completely unplugging in order to get my thesis proposal written in time. I don’t have pics from this part (because of the unplugging), but it was very nice. I hope to turn this into a tradition for future writing sessions, if I can. Next time, maybe a cabin on the top of a mountain or something would be nice.</p>

<h2 id="travel-washington-dc-for-a-talk-at-umd-and-a-very-good-day-off">Travel: Washington DC for a talk at UMD and a very good day off</h2>

<p><em>Apr 29 - May 1</em></p>

<p>Joel over at UMD’s HCIL invited me to give a proto job talk, which I very much enjoyed. Everyone was generous with feedback, even when critical. I had another bout of impostor syndrome here because I was asked (in regards to my mention of my tech industry funding), “why not just get a job in industry? why work in academia?” It’s a hard question to wrestle with, especially when the temptation to return always exists (and <em>especially</em> with the current state of things right now).</p>

<p>But academia does think a bit too highly of itself. The desire for a “pure” environment where people pursue theory and knowledge unbounded by profit or military incentives <em>is</em> something we would have in a healthy society. But we don’t have that! And asking someone (whose work is very much not profit-driven, like myself) why they don’t get a job that “makes money” in industry is essentially asking someone why they are doing the work they are doing at all. (Accessibility work is something that many companies want, but few are willing to fully fund for themselves. I cannot do the work that I want to do full-time in an industry role, unless it is explicitly set up for “foundational” and not “product” focused research and innovation.)</p>

<p>Shelby and I also enjoyed a great evening + following day in Washington DC. We did museums and then poked around Georgetown (and went to a really lovely cat cafe!!).</p>

<h2 id="pycon-in-pittsburgh">Pycon in Pittsburgh</h2>

<p><em>May 16</em></p>

<p>I gave a wonderful talk with Pavithra from Quansight at PyCon US, which was in Pittsburgh this year. It was great to talk to the Python community about accessibility and visualization, which is something that the Python ecosystem (writ large) has not really considered.</p>

<p>You can catch <a href="https://www.youtube.com/watch?si=YD7-U1MxGfdgXFSI&amp;v=WZMo6QG1j98&amp;feature=youtu.be">our PyCon talk on YouTube</a>, if you want.</p>

<h2 id="travel-miami-for-a-talk-at-outlier-board-games-and-cuban-bakeries">Travel: Miami for a talk at Outlier, board games, and cuban bakeries</h2>

<p><em>June 9 - 15</em></p>

<p>I was invited down to Outlier in Miami for a talk on <em>Softerware</em> (this time expanding from my CSUN talk, which covered <em>what</em> we did, into <em>why</em> and <em>now what</em>).</p>

<figure>
    <img src="https://www.frank.computer/images/outlier.jpg" alt="Me, standing up on a stage and smiling." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Outlier was packed with amazing speakers. I am honored to have been one of the ones who could steal the stage. You can find my <a href="https://www.youtube.com/watch?v=IleWP0gCeOc">Softerware talk on Youtube</a>.</figcaption>
</figure>

<blockquote>
  <p>For context: Outlier is the premier, boutique industry conference for data visualization. In the past we had Eyeo and Tapestry, which were both centered on non-corporate explorations of data visualization. For the most part, this allowed the community to really experiment, celebrate, and innovate in beautiful ways. We don’t quite have that anymore, but Outlier is close.</p>
</blockquote>

<p>I had a paid trip to fly down to record our course introductions for the <a href="https://openvisualizationacademy.org/">Open Visualization Academy</a> (which opens Jan 31st!) the day before Outlier started. Then I stayed 3 extra days after recording, so I could attend Outlier. The Outlier folks paid a small speaker’s fee, so in the end it was basically a free trip (since the fee helped with food and the extra stay).</p>

<p>I played a lot of board games during the in-between moments of sessions, and near the end at Alberto Cairo’s house. This, and the unbelievably cheap and delicious Cuban baked goods, were the highlight of the trip without a doubt. Alan Wilson brought some lovely games that he would pull out between sessions (or when we wanted to skip).</p>

<figure>
    <img src="https://www.frank.computer/images/cuban_bakery.jpg" alt="A pair of ground ham sandwiches and a box filled with curly fries and an empanada." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Get this: two sandwiches, an empanada, and a huge tray of curly fries was less than 12 USD after tax. Unreal value. A mountain of food. I would be eating here constantly if I lived in Miami.</figcaption>
</figure>

<p>I have been playing board games online (on tabletop simulator, mostly) now with Alberto and Alan for several years. It was a delight to play with them for the first time in person.</p>

<figure>
    <img src="https://www.frank.computer/images/cthulu.jpg" alt="A group of friends, smiling with a large game board between them full of colorful plastic pieces shaped like horrible monsters." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>I won the match, but to be fair nobody knew that my faction benefitted the most from a table of people who haven't played before and were nervous to take action. Also: Andy and Alberto here are micro-celebs in visualization, so this was a pretty star-studded cast, if I do say so.</figcaption>
</figure>

<h2 id="summer-of-commissioned-art-bbq-making-an-open-course-for-ova-dutch-babies-family-stuff-designing-a-tabletop-game-and-elden-ring-co-op">Summer of commissioned art, bbq, making an open course for OVA, Dutch babies, “family stuff,” designing a tabletop game, and Elden Ring co-op</h2>
<p><em>Jun - Aug</em></p>

<p>The summer was packed. For Shelby’s birthday, I commissioned art of Shishky and Pizzelle. We had a neighborhood pig roast (and a huge storm hit; it was amazingly messy and fun). I put together my course for the Open Visualization Academy. I baked many Dutch babies, for some reason. (You can check out <a href="https://www.instagram.com/frankelavsky/">my insta</a> for the deets here, it was all documented.)</p>

<figure>
    <img src="https://www.frank.computer/images/commissions.png" alt="Two pieces of art, one of Shishky and one of Pizzelle, drawn in a medieval cartoonish style. The art is trimmed like an illuminated manuscript, full of filigree and fine details. Shishky is looking at a crown on the ground saying Oh Wow while Pizza is standing on her hind legs, smiling, while asking Would you like to battle?" style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Illuminated Icons of Saint Shishky and Blessed Knight Pizzelle.</figcaption>
</figure>

<p>But the biggest thing that happened was that Shelby had a major surgery (rather unexpected and serious, but it went well). I won’t talk about it here on the blog, but it was pretty much life-altering (in a good way). At the time, it was quite stressful, but the surgeon did a perfect job and her recovery went without a hitch.</p>

<p>For the recovery, I took off 4 weeks to help her with everything you could think of. It was a blast and in some ways, I think I’d be a decent caregiver. But in the meantime (while she was recovering in the early stages especially), I couldn’t get an ounce of work done. At the time, I was still pretty racked with nerves.</p>

<p>So I built a little tabletop game to keep me distracted. We played it, it was a blast (and I even started coding it up for fun too).</p>

<figure>
    <img src="https://www.frank.computer/images/board_game.jpg" alt="An array of printed pages of rules, little printed cutouts of figures and creatures, a board assembled with tokens and strings, and some dice." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>My tabletop game prototype for my little project I'm calling "Hero's Wizard" where you play a wizard who is supporting a main character, The Hero. You have to assemble a crew of followers, navigate a 3-Act story along a random path of decision points, and then destroy (or join) the evil forces that threaten the free world. It's based roughly on Slay the Spire's core mechanics (random, rogue-like, deck builder) but there is a light fantasy narrative at play, and it's a spell builder and party builder more than a deck builder, strictly speaking.</figcaption>
</figure>

<p>And then we installed mods for Elden Ring (that allowed co-op without invasions) and played the entire base game and DLC together. It was stupendous and might go down as one of my favorite memories playing a video game (second to the greatest: showing her Fallout New Vegas during our first Winter Moon in 2013 but beating out our first date together, which was playing FF7).</p>

<figure>
    <img src="https://www.frank.computer/images/elden.jpg" alt="Two characters, posing absurdly with their arms outward. One is kneeling in front of the other as they both face forward. The flower of the defeated Malenia is behind us." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Shelby started a tradition where we would pose like <a href="https://dragonball.fandom.com/wiki/Ginyu_Force">Ginyu Force</a> characters after defeating a major boss together. I cackled every time (because I was always so focused on the fight that I'd forget she is already posing the moment it dies) before joining her for our victory emote. (This one is after defeating Malenia, which we did on our first try lmaooo.)</figcaption>
</figure>

<h2 id="travel-new-york-city-for-a-talk-at-smashing-conf-broadway-pizza-and-getting-an-exclusive-tour-of-barnard-college">Travel: New York City for a talk at Smashing Conf, Broadway, Pizza, and getting an exclusive tour of Barnard College</h2>
<p><em>Oct 6 - 9</em></p>

<p>I was quite fortunate this year to have a paid trip to New York to speak at Smashing Conf alongside my brilliant former co-worker from Visa, <a href="https://www.linkedin.com/in/lilachmanheim/">Lilach</a> (also known as Layla). <a href="https://www.linkedin.com/feed/update/urn:li:activity:7322652849845280769/">Visa did a big push this year</a> and open-sourced the rest of their design system as well as overhauled many of the patterns, materials, resources, and approach for the Visa Chart Library.</p>

<p>I believe it was Srini (or someone else) who posted to LinkedIn, and the post gained traction in the larger design community. After I posted how proud of everyone I was, the legendary <a href="https://www.linkedin.com/in/vitalyfriedman/">Vitaly Friedman</a> eventually reached out to me to ask if I would be willing to speak about the accessibility work I did for Visa years before. Of course, I said yes, but with an important stipulation (since I was no longer at Visa): I schemed a situation where Layla and I could give a talk together and each get a little trip to New York out of it, too. (We wanted to get Jaime in on it as well, but alas, logistics couldn’t work out!)</p>

<p>Our talk went well. Layla had outrageously good slides and our Q/A after with Vitaly was really thoughtful and well done. The staff were lovely during coordination and the event itself, and it was a wonderful chance to meet many other folks, too. Being able to speak at Smashing was an honor, and I hope that in the future they consider inviting me back for a solo talk on accessibility and visualization (and let me dig deeper!).</p>

<figure>
    <img src="https://www.frank.computer/images/smashing.jpg" alt="Layla, Vitaly, and I on stage." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>There is a tradition where Vitaly jokes with the speakers before you give your talk, and it is almost always improv. Once the recording goes live, I need to recall this moment to see if I was even remotely funny.</figcaption>
</figure>

<p>I brought Shelby along and we had a blast. We enjoyed a Broadway show, Death Becomes Her (amazing), and had a wide range of good food (including the famous <a href="https://maps.app.goo.gl/4CyBaM7SiMdiYcy7A">L’Industrie Pizza</a> in Brooklyn). We met with Cindy Bennett for dinner (so great to see her) as well as met up for coffee with Fatima Koli, an acquaintance-turned-immediate-bestie who gave us a little tour of her lab, working space, and the faculty lounge at Barnard College. Note: getting anywhere on Columbia’s campus required permission; they have guards posted all over! So we got to “sneak” in (albeit officially, in the proper way) and get a tour. I had applied for a faculty position there just a couple days before, so it also felt extra special.</p>

<h2 id="fall-of-24-job-applications-baking-walks-and-co-teaching-visualization">Fall of 24 job applications, baking, walks, and co-teaching visualization</h2>

<p><em>Sep - Dec</em></p>

<p>The Fall was packed, outside of our special trips. I applied to 24 faculty jobs (which is no joke, these things require a non-trivial amount of writing per job when you apply) and co-taught <a href="https://dig.cmu.edu/courses/2025-fall-datavis.html">Data Visualization</a> with Dominik.</p>

<p>Shelby and I had some lovely walks while the weather was perfect (in Pittsburgh there are about 2 to 3 weeks in Fall that are perfect weather combined with stunning seasonal beauty). And I also started taking baking more seriously, re-learning all the basics from scratch and working my way through Forkish’s outstanding book on bread baking, <a href="https://kensartisan.com/flour-water-salt-yeast"><em>Flour, Water, Salt, Yeast</em></a>. In fact, as of writing this blog: it is currently Jan 16th at 4:22 in the morning and I am folding a poolish white bread to take into the DIG Lab.</p>

<figure>
    <img src="https://www.frank.computer/images/bread.png" alt="Two crusty loaves of white bread on a cooling rack." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Some poolish bread I am proud of. The light fermentation smells amazing (too bad you can't smell photos).</figcaption>
</figure>

<p>Baking bread has been a beautiful way for me to re-connect and re-invent something I’ve been doing for over 20 years. My breads always lacked a certain spark. When I started baking back in 2006 at my second job in high school (I was a <em>doughboy</em> at the French bakery La Vie en Rose in my hometown, Anacortes), the loaves were robust, with a thick crust, and had a hearty Maillard darkness to them. My bread at home never looked that good.</p>

<p>But this year, we got a 6.5 qt Dutch oven, which helps immensely. And I’m still only about halfway through Forkish’s book, but I’ve certainly improved my technique far beyond where I’ve ever been because of it. It’s a great book.</p>

<h2 id="travel-vienna-for-a-workshop--2-workshop-talks-at-ieee-vis-fine-dining-and-many-good-friends">Travel: Vienna for a workshop + 2 workshop talks at IEEE VIS, fine dining, and many good friends</h2>

<p><em>Oct 28 - Nov 8</em></p>

<p>I traveled to Vienna for IEEE VIS, and Shelby and I took a pre-conference vacation beforehand. It was an outstanding trip. Vienna is one of the most beautiful cities I’ve ever seen and we had pretty great food there (a lovely Japanese food scene speckles the city). We also had a particular pizza (a “carbonara” pizza) with a napoletana-style crust, combined with two amazing desserts (one was a Biscoff tiramisu, which felt illegally good). Over the course of the trip, I had that non-traditional pizza and tiramisu 4 times (seriously!) because it was such an easy suggestion when picking a place to meet up with someone.</p>

<figure>
    <img src="https://www.frank.computer/images/vienna_pizza.jpg" alt="Two char-bubbled pizzas with beautifully stretchy, yeasty, fluffy crusts." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Lovely pizza, the kind that you can't really get delivered (best eaten fresh). The one furthest in this photo is the "carbonara," which had some cuts of salty ham and a white sauce. It was delicious, with a smoky and deep umami taste.</figcaption>
</figure>

<figure>
    <img src="https://www.frank.computer/images/vienna_tiramisu.jpg" alt="A complicated looking dessert with a small persimmon on top of a fluffy frosting with a brown coloring." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Probably one of the best desserts I've ever had. I feel like the Biscoff on top is cheating, but it was unbelievably good.</figcaption>
</figure>

<p>We had an impromptu night where we failed to find the location of one of our reservations and instead stumbled on a dead-end cloister of an old monastery with practically no lighting at all. A kindly monk (whom I am calling a monk only for the sake of having a more-fitting character in our story at this location) informed us that our destination was a painful 10-minute walk around the block and up a nearby street. Instead, there was a warm glow of light on the other side of the cloister, where a small, nice-looking restaurant sat. We decided to walk up and see if we could get seats. To our surprise, we saw the Michelin mark as we approached (and I later found out it was, indeed, a Michelin guide restaurant). So, we decided to do fine dining that night, even though it wasn’t on the itinerary for the trip. They had seats (miraculous) because a party had cancelled. We had a lovely time and the serendipity of it felt quite special.</p>

<p>Shelby went home as VIS officially started. And during the conference, seeing friends and colleagues gave me life. Our accessibility workshop was packed, and both my talks (a paper and a closing “keynote”) were quite fun. (I should probably record the state-of-the-art keynote I did; it was pretty good this year.)</p>

<figure>
    <img src="https://www.frank.computer/images/accessvis_workshop.jpeg" alt="Me, up in front of a room, with slides behind me that say How do descriptions bias a person that is blind?" style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Me, giving a short talk about <a href="https://arxiv.org/abs/2508.12192">our little paper on LLMs, bias, and visualization descriptions</a>.</figcaption>
</figure>

<p>A standout conversation came from seeing Alex Kale and Arvind Satyanarayan both speaking at a little table together and walking up to join them. (I’m sure neither of them knows this, but they’re two of the small handful of people I look up to the most in our community. They’re fiercely curious and generous with their time and ideas; a good combination to have.)</p>

<p>We immediately started on one of my favorite topics: <em>theory</em>. In all my PhD, I had hoped for more conversations like these. And they happened here and there, in our design mini course all of us PhD students took or in some sidebar with <a href="https://jtaylor.lgbt/">Jordan</a> or <a href="https://cella.io/">Cella</a> or <a href="https://hcii.cmu.edu/people/franchesca-franky-spektor">Franky</a>. But to have a visualization-focused dive into theory with two people who I look up to? It cured my brain.</p>

<p>All the pent-up impostor syndrome I had gained while applying to jobs, getting CHI rejections, and my two encounters earlier in the year (SIGCSE and HCIL) washed away. I realized that unless I am horribly mistaken in my ability to talk with people (and horribly mistaken in how I think that conversation went), I’ll probably always have a place in this little slice of academia.</p>

<p>We weren’t pretentious, but in the wrong company any one of us could have come across that way. But it was a cool glass of water to un-pretentiously and freely talk about topics like <a href="https://arxiv.org/abs/2508.06751">Alex’s outstanding latest short paper</a>, or what I was calling the “humility” of <a href="https://dl.acm.org/doi/10.1145/3468505">generative theories of HCI research</a> (building on a thread Arvind first discussed). I genuinely feel like we were all using our brain cells and our neurons were actively growing together. It was lovely.</p>

<p>And after VIS, I thought to myself, “maybe I’ll be able to make it…” I have some really decent friends here in this community now, too. So yeah, I will probably be okay.</p>

<figure>
    <img src="https://www.frank.computer/images/jonathan_me.jpg" alt="Jonathan Zong and I, smiling and taking a really well framed selfie while on an escalator heading down into a subway station." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Jonathan Zong and I. He's one of my favorite humans in the world and I'm glad we get to work at this little intersection of accessibility and visualization together.</figcaption>
</figure>

<h2 id="the-flu-i-caught-the-flu">The flu! I caught the flu.</h2>

<p><em>Nov 8 - 12</em></p>

<p>Just after my trip, lo and behold I didn’t feel well. My flight home felt extra uncomfortable. I took a combined covid + flu rapid test at home and (to my great surprise) tested positive for the flu. I was vaccinated! I thought I would be fine. But, as I later found out, the variant that was going around was <em>especially</em> bad. Good thing I was vaccinated, or it would have been much worse for me. I was a little sick, very groggy, but overall okay. I had it bad for about 4 days and then just a cough for about 10 more. I didn’t test positive again after the first day and never had a temp, so I think it was really mild for me.</p>

<p>But as someone who is immuno-compromised at times, getting sick can be quite scary! Luckily, my first flu since probably 2010 or so didn’t go badly.</p>

<h2 id="travel-a-job-talk-and-the-best-smash-burgers-ever">Travel: A job talk and the best smash burgers ever</h2>

<p><em>Nov 18 - 20</em></p>

<p>Well, I had my first in-person interview for a faculty position later that month. It was so wonderful. I can’t say where, but I think it went well and I had a great time. I gave my first official job talk, which I think was alright. I definitely still need work, but it wasn’t bad.</p>

<p>And at about that time, I had found a mythical-tier smashburger joint 2 hours north of Pittsburgh in Erie, PA, <em>in a gas station</em>. It’s called <a href="https://maps.app.goo.gl/mzGmTW1r95kFLvp39">Bro Man’s Sammiches</a> and takes the title of best smash I’ve ever had. It was a perfect date, too. It felt totally random and the food and service (at the <em>gas station</em>, lest I remind you) were flawless. That may have been the most American meal I’ve ever had. It ruled.</p>

<figure>
    <img src="https://www.frank.computer/images/smash.jpg" alt="A smash burger viewed from the side and held upside down." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Look at the paper-thin lace on this! Crispy, juicy, perfection.</figcaption>
</figure>

<h2 id="travel-australia-for-4-dev-conferences-in-3-cities-and-making-a-lifetime-of-memories">Travel: Australia for 4 dev conferences in 3 cities (and making a lifetime of memories)</h2>

<p><em>Dec 1 - 12</em></p>

<p>Ah, a lifetime has been lived in December alone. YOW! Conferences reached out to me back in the summer (or spring?) to take part in a 12-day, 3 city “tour” across Australia, all expenses paid, with some 20 other speakers, to give talks at their developer conferences.</p>

<blockquote>
  <p>For context, YOW! is the premier developer conference in Australia and has been running for a couple decades now.</p>
</blockquote>

<p>Of course, I had to say “yes.” A paid tour like this is the stuff of legends. So on December 1st, I left home and on the 3rd arrived in Melbourne. We did Brisbane and Sydney after that as a small cohort of the same 20-something speakers. Many of us became good friends, and some folks I really do hope to stay in touch with as the years go by. I connected with the anti-AI radicals, the secret communists and socialists of the group, the not-so-secret D&amp;D player, the micro-celeb podcaster, and all the coffee fanatics among us. It was a beautiful group of humans to travel with.</p>

<figure>
    <img src="https://www.frank.computer/images/yow.jpg" alt="Me, standing on stage as I am gesturing to some visualizations behind me. The slide shown says Don't Rely on Color Alone." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>I have some hilarious, different action shots of me, on stage, talking. This one is probably the best "normal" one though.</figcaption>
</figure>

<p>I spent time with Larene and Damian (Damian works for YOW as their technical director), who are my overseas besties (we had tons of adventures back when I visited Melbourne in 2023 and I’ve been friends with Larene since 2019 or so). Damian also managed to loop me into doing a closing keynote (aka “locknote”) at DDD Brisbane, which is one of my favorite talks I’ve ever given, “Tool-making, for good and evil,” which you can watch below:</p>

<div style="position:relative; overflow: hidden; width: 100%; padding-top: 56.25%;">
  <iframe style="position: absolute; top: 0; left: 0; bottom: 0; right: 0; width: 100%; height: 100%;" src="https://www.youtube.com/embed/W9LDW-t09oY?si=fnWsKpKawhkwozmX&amp;cc_lang_pref=en&amp;cc_load_policy=1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>
<p><br /></p>

<p>(The talk was a bit raw, but the spirit of what is there I hope to really build out over the years. It <em>was</em> my most-complimented talk ever. People seemed to really love it.)</p>

<p>On the 10th, Shelby arrived in Sydney and the real fun began. She took a day to settle in, and the next day I played hooky from the conference so we could adventure a bit. We had unreal food and I spoiled her with some really nice shoes. Also (thanks to Sarah from the YOW! crew for inspo), I bought Shelby a nice outfit and had it shipped to our hotel in time for our double date doing fine dining on the Sydney waterfront with Larene and Damian. The clothes, shoes, and a surprise dessert at dinner were all part of my anniversary gift to her. We made it past our 13th year! It’s all good luck from here on out.</p>

<figure>
    <img src="https://www.frank.computer/images/anniversary_date.png" alt="Shelby, Larene, Damian, and I taking a selfie at the Sydney waterfront." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Our international besties.</figcaption>
</figure>

<p>I posted a bit more about this night over on Instagram, but the summary can be thus: it was perfect, in every way.</p>

<h2 id="travel-winter-mooning-in-auckland-new-zealand">Travel: Winter-mooning in Auckland, New Zealand</h2>

<p><em>Dec 12 - 18</em></p>

<p>For context here, Shelby and I have had a tradition since our wedding that we call the “winter moon.” This was, of course, just our “honeymoon” the first time we did it. But when we got married, I actually took the whole month of January off, so we could just enjoy each other. We had our wedding reception some 2 weeks after our ceremony and after our “official” honeymoon trip (we took the train down to Disney and back for Christmas). But that month was perfect: a peek into what life could be like whenever we both retire and decide to just laze about for the rest of our lives (if such a thing happens). My friend, Stephen Van Etten, first told me about this strategy for a honeymoon: you act exactly how you want your “Garden of Eden” to be, whatever a perfect life together will look like once you’ve figured it all out.</p>

<p>But we evolved the tradition into “winter-mooning” where we will often spend an extended period of time just enjoying each other, completely. Maybe we take a little trip to my friend Erickson’s family cabin on an icy lake? Maybe we play a new video game from start to finish? Or maybe we go to Auckland, New Zealand!</p>

<p>After Australia, we took a “real” vacation in Auckland for 6 days. It was, of course, far too short. I would have liked another week at least (we didn’t even get to Hobbiton or the south island at all!). Two other couples from YOW, Katharine and Aaron and Roy and Anouk, both did a south island adventure in camper vans (all 3 of us couples planned these trips separately, which is quite funny to me).</p>

<p>But the highlights of the trip were two-fold: first, Auckland has, per capita, probably some of the best food we’ve ever had. We didn’t miss! Every meal we ate was top tier. I had the second-best smash burger I’ve ever had, we had the best fried chicken we’ve ever had (Samoan-Korean fried chicken), incredible dumplings, the best cocktails we’ve ever had (top 1, 2, and 3 slots all taken by Auckland), and the best pour-over coffee I’ve ever had (we did a tasting at a place with an award-winning barista).</p>

<p>The second highlight was our day-trip doing wineries and a dinner date on Waiheke island. That was magical. We concluded that wine tours are massively overrated and probably something that only bored, wealthy boomers do when they need a break from whatever cruise they’re planning next. But it was still a fun thing to try for the first time. And our dinner was utter magic. I frantically tried to find a place for us for dinner a few nights before we arrived. I wanted it to be nice (it didn’t have to be fine dining or anything), but the goal was to catch the sun setting over the Pacific. I wanted to see the water from wherever we chose to eat. And we got fine dining <em>and</em> a sunset view.</p>

<p>We also got a bonus: the winery (and restaurant) is called Mudbrick, and they had a little cat named “Mud,” a stray who showed up one day and now occasionally graces the guests while they enjoy their night of fancy food and drinks. Mud was an angel and easily made the already perfect night ascend into the realms of the sublime. I couldn’t have planned that the restaurant’s favorite stray cat would not only show up but then choose to lay near <em>us</em> the whole night!</p>

<figure>
    <img src="https://www.frank.computer/images/mudbrick.jpg" alt="Shelby and I taking a selfie in front of an open view of the water as the sun is setting." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>I was reading elevation maps and checking tide tables and sunset times to get this little sunset meal experience! Worth it.</figcaption>
</figure>

<p>One of the funniest parts of the whole trip came post-dinner: we had tickets to ride back to Auckland on the last reserved ferry of the night, but the cabbies on the little island were unbelievably chaotic. We called a cab; 20 minutes later (no cab), we called a different company for a different cab. And then <em>another</em> 20 minutes later, our first cab arrived! We barely made the ferry (we had to run on full stomachs across the dark docks, while pretty toasty, to catch our little ferry boat at the last minute!). <em>It was a blast.</em></p>

<h2 id="final-notes">Final notes</h2>

<p>2025, despite being arguably one of the best complete years of our lives, was pretty difficult in many ways. I had some of my biggest doubts about myself and this PhD this year, nearly ready to quit at a few points. I haven’t felt proud of work that I’ve done in quite a long time, possibly since my time at Visa. Baking bread and making a tabletop game, despite being small subjects in this blog post, reminded me that I can make things I’m proud of. I had many paper rejections this year and am filled with immense dread thinking (at this moment, early in the morning on Jan 16th) that not only is there a possibility that I won’t get a job this year in academia, but I might not be able to get an industry job, either. Freelancing would be a hard life. But 2025 did help me with this: at Smashing and YOW I met people whose whole thing has been freelancing, some for 30+ years. And I know with confidence that there is plenty of work out there to be done. So even in the “worst” case scenario, we will probably be okay.</p>

<p>Deciding we wanted to move overseas (fairly confidently, might I add) and then discovering that faculty jobs overseas (especially in Europe) are nearly impossible to get straight out of a PhD, was relatively depressing. There’s a strange lack of faith and trust in people across Europe, compared to America.</p>

<p>Our fine dining in Austria, and our Austria trip in general, was a bit eye-opening. For context, when Shelby and I do fine dining, we like to charm our servers. We want to befriend them by the end of the night. It’s a fun game, which I highly recommend doing whenever you go out to fine dining (by this, I mostly mean where you pick the “chef’s menu and pairing” and get 5-7 courses of anywhere between 7-20 dishes, combined with wine or sake or whatever). But we befriended the server quickly, they were easy going and interested in America. They were born and raised in Vienna and huge into football (the European kind) but also a fan of the Steelers (hilarious) and American football. When we asked why, they said that it was because there weren’t “blood” rivalries, like you find with European football. “In Europe,” they explained, “you can’t hate the Italians or British or whoever, despite atrocities of the past. But you can hate their sports teams. Americans don’t understand this. But here, you might not have your children wear certain colors, because that may represent a team you hate and your whole family has hated. It’s generational.” I pondered if this was why everyone in Europe seems allergic to color in fashion; they’re always just in blacks, greys, or tans.</p>

<p>But that conversation made me recognize something: not that “young” America can’t be full of generational hate, like our Austrian server seemed to believe, but rather that there are advantages to a country with a young story. We aren’t bound to traditions of hate that date back 1000 years, even though hate is here (and boy, is it violently strong right now). But we can beat the hate of our present day in ways that the Viennese who stop their children from wearing violet or green might not believe in.</p>

<p>Food also reminded me that America is actually awesome. Pittsburgh has a “low” variety and quality of cuisines, in my opinion, and yet utterly smokes most major cities in Europe. Some places in Europe do specific things well, but what I’ve learned is that as you travel, you begin to recognize the genuine beauty of America’s youth as a country and culture. Just as one example: We aren’t afraid of Asian food, like they are in Norway. Believe me, I was shocked to hear from folks at Highsoft (when we were there in 2024) who had never tried the local Thai restaurant, which was one of only 4 restaurants in the small village of Vik (and 1 of those restaurants is a gas station!). “Really? You’ve never tried it?” I thought to myself.</p>

<p>The willingness to break from monocultural “tradition” and explore the beauty of someone else’s culture is deeply ingrained in me, as an American. Growing up in the Pacific Northwest, we had waves of Chinese, Japanese, Vietnamese, and Korean immigration over our short post-settler history. Combine that with present-day migrant, seasonal workers from Mexico each year, and our food growing up was amazing. The cuisines that immigrant communities brought, adapted, and experimented with pretty much established in my head the assumption that complex cuisines are part of life and that the evolution of cuisine is more important than some fantasy about “authentic” or “true” traditional experiences. I simply took for granted how special it is to live somewhere that hasn’t spent hundreds of years doing things the same way. We are constantly remaking cuisine, all the time. Everywhere in the US, it’s our immigrant communities who push the boundaries of “American” food. We are all better off for it, whether our families have been here 100 years or just showed up.</p>

<p>The final thing that made me appreciate the US, and in a way I’ve never done before, was a post on social media of a mundane, but beautiful, picture of New York City during sunrise. (This, of course, was posted in the midst of everything the Federal Government and ICE is doing right now.) The post simply said something along the lines of, “cities always outlast their empires.”</p>

<p>And I thought of Vienna. <em>My</em> family, as far back as I can trace it, left the Austro-Hungarian empire for the US in the late 1800s, after slavery in the US was abolished and people started heading west. My family fled the mountains of now-Slovakia when the encroaching pogroms of the east and the weakening of the empire they were living under dashed their hopes for a safe and bright future.</p>

<p>And originally, I used to think to myself that perhaps I was pursuing a “family tradition” where we bail on an empire before it falls and seek refuge elsewhere. But <em>Vienna</em> still stands, despite their empires. New York City survived the Dutch and the British, it will absolutely outlive the United States. The cities matter. The people within those cities matter. Empires? Empires don’t matter. Empires come and go.</p>

<p>A well-traveled friend in the PhD program said to me at a going-away party for another friend, “there is fascism everywhere” (or something equivalent). He said this because I told him I was hoping for a research faculty position abroad.</p>

<p>And it fully dawned on me: I shouldn’t keep running because I think fascism is here and will destroy us. This next phase of our lives should be about growing roots and cultivating a garden. I need to move somewhere I want to believe in, rather than always running from something bad. <em>Not from, but towards.</em></p>

<p>I would rather fight for a community with the fierceness that the Twin Cities are fighting ICE right now than continue to run. The US has wonderful people in it and fantastic cities. The concept of “America” and the utter depravity of our federal government cannot even fathom how to desecrate the sanctity of everyday life. What is beautiful here will endure.</p>

<p>It is probably for this reason (that our beautiful, little diverse pockets all over the US are holy ground) that the federal government, racists, white supremacists, and the worst people among us are so incensed. I’d be mad too, if I knew no love and witnessed an untouchable, beautiful, immortal good whose very existence exposed how necrotized my body had become. Fascists are all wanna-be necromancers, rotting away, worshipping fantasies of bones and flesh they’ve conjured up in their minds.</p>

<p>In any case, here’s to hoping I land somewhere wonderful this coming year. Stay tuned.</p>

<p>My final note is this: I couldn’t have had this year without the PhD. The flexibility it afforded me was exactly why I left an industry role. I’m sure I’ll look back fondly on 2025 for many years. I’m glad I survived it.</p>]]></content><author><name></name></author><category term="personal" /><summary type="html"><![CDATA[My most adventurous year yet. This year, without any doubt, I know that the PhD was worth it.]]></summary></entry><entry><title type="html">ZIRP+174 Took Your Job, not AI</title><link href="https://www.frank.computer/blog/2025/11/zirp-174.html" rel="alternate" type="text/html" title="ZIRP+174 Took Your Job, not AI" /><published>2025-11-04T00:00:00+00:00</published><updated>2025-11-04T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2025/11/zirp-174</id><content type="html" xml:base="https://www.frank.computer/blog/2025/11/zirp-174.html"><![CDATA[<p>I saw a chart going around on social media with “ChatGPT released” overlaid on data showing two lines: the S&amp;P 500 and Total Job Openings. I added two new annotations (174 and ZIRP) for more context:</p>

<figure>
    <img src="https://www.frank.computer/images/zirp-174.jpg" alt="Total job openings and s&amp;p 500 from 2003 to the present. Jobs and the stock market track closely together until 2022, when they diverge (stocks rapidly rise and jobs rapidly fall). Overlaid are 3 annotations: section 174 kicking in 1/1/22, ZIRP ends 3/1/22, and Chat GPT released in december of 22." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>This chart has gone absolutely viral across multiple social media platforms (Facebook, Threads, Twitter, Bluesky, and so on). The original annotation (for GPT) isn't just a "correlation does not equal causation" issue but more of a problem like "sensation does not equal causation" (a problem of our times, it seems).</figcaption>
</figure>

<p>First off, job openings were clearly being hit before the release of GPT (and so was the stock market). The downturn had already begun. Knowing why one recovered and the other didn’t is key. And this is partly, if not majorly, because §174 kicked in on Jan 1st of 2022, followed by our short, post-covid ZIRP effectively ending in March of 2022. Those two have been devastating for tech workers.</p>

<p>I am by no means an expert, which is also why my alarm bells have been going off for years any time someone has offered the seemingly simple explanation that AI is taking jobs.</p>

<p>Really? Just AI? That’s too convenient. Do you really trust that’s how the world works? Nothing is ever that simple. It’s probably even more complex than ZIRP, 174, and AI combined. It always is. Interrogate your charts, people!</p>

<p>(That being said, ChatGPT and the modern AI <em>griftrastructure</em> very likely explain why the stock market is going up, but not why jobs haven’t recovered!)</p>

<p>If you want to know more about what these two things are, here are my recommended primers:</p>

<p><a href="https://lnkd.in/eVszVWJq">What is ZIRP and how have historical ZIRPS worked?</a></p>

<p><a href="https://lnkd.in/eU85AUr7">How Trump’s tax code ticking time-bomb fueled mass tech layoffs</a></p>]]></content><author><name></name></author><category term="ai" /><category term="jobs" /><summary type="html"><![CDATA[If you talk about AI and tech jobs (and the bloodbath of the last 3 years) without discussing ZIRP's rapid disappearance combined with §174 changing (*both* of which hit the fan in 2022), I can't take you too seriously...]]></summary></entry><entry><title type="html">Stop calling the Super Productionizer a ‘baby blender’</title><link href="https://www.frank.computer/blog/2025/06/baby-blender.html" rel="alternate" type="text/html" title="Stop calling the Super Productionizer a ‘baby blender’" /><published>2025-06-14T00:00:00+00:00</published><updated>2025-06-14T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2025/06/baby-blender</id><content type="html" xml:base="https://www.frank.computer/blog/2025/06/baby-blender.html"><![CDATA[<p>I’m writing a firm defense of the Super Productionizer (SP™). I’m sick of the baseless, meaningless, self-serving critiques I keep hearing online from “influencers” who only care about their followers and social media popularity.</p>

<h2 id="baby-blender-is-offensive-and-incorrect">“Baby blender” is offensive and incorrect</h2>
<p>Stop calling Super Productionizer™ (SP™) “the baby blender!” It doesn’t <em>currently</em> blend babies, according to the latest report by Good Corporation (GC). Yes, SP™ has <em>already</em> “blended” babies, but we can’t do anything about that now. So we can’t stop the subatomic-emulsification process, since it already started.</p>

<p>And you should call it “emulsification” or the more-precise “subatomic-emulsification,” not “blending,” because that is actually correct. “Blending” babies makes it seem so sensationalized. Plus, murder (by blending) is obviously illegal. But subatomic-emulsification is what we all do all the time, every day. Our brains and bodies are constantly performing microscopic, small-scale forms of subatomic-emulsification all the time. Just because SP™ performs subatomic-emulsification way faster and more efficiently than we do on our bodies doesn’t mean it is “murder” or “illegal.” Law and policy have nothing stopping Good Corp from continuing to scale subatomic-emulsification.</p>

<h2 id="the-ethics-of-subatomic-emulsification-of-babies-its-good-actually">The ethics of subatomic-emulsification of babies: it’s good, actually</h2>
<p>Yes, post-birth, living, sentient, human tissue-containers (“babies”) were “taken” if you argue this narrowly. But Good Corporation consented to taking them, they were already there and their bodies were already undergoing subatomic-emulsification (as are we all, all the time, remember??) so the ethics there should be pretty settled.</p>

<p>But because post-birth, living, sentient, human tissue-containers perform subatomic-emulsification at really slow paces, there is less of a Good Corp Opportunity™ to productionalize their contributions to society. Post-birth, living, sentient, human tissue-containers are low-production members of society, basically a waste of resources. Converting them to something more efficient is just good business.</p>

<h2 id="personal-benefits-of-the-super-productionizer">Personal benefits of the Super Productionizer</h2>
<p>SP™ makes me work faster!! Arguably, I’m at least two or three times as efficient as I used to be. And my dongle company needs to make more dongles! It’s absurd to ask a worker to be less efficient.</p>

<p>At the end of the day, The Incentives are to be more productive. And SP™ gets a lot more done. I’m two or three times as fast with SP™ than before. So stop asking me to have “other” incentives. (LOL?? what does that even mean???)</p>

<p>People have tried to argue that “morality” should be an incentive - as if something as personal and selfish as that should interfere with dongle production! Again, these social media influencers only care about themselves, making themselves out to be some kind of holier-than-thou justice warrior who is unable to do the hard work of dongle-production-button-pressing that I do all day from my dongle-desk™.</p>

<h2 id="critics-without-solutions-are-useless">Critics without solutions are useless</h2>
<p>Plus, just saying that SP™ has “blended babies” and we should “stop blending babies” doesn’t offer any actual, real, working solutions. Are <em>you</em> doing anything about it? How are you, personally, going to make up for my 200% increased dongle-production efficiency at Dongles LLC?</p>

<p>These social media influencers talk about “climate change,” “murder without consequences,” “water insecurity” and the growing divide between people who have access to Super Productionizer™ and those who don’t. Plus, they say that because the SP™ gives me a kiss on the forehead every now and then when I do a really good job listening to it, that it is seducing me into an unhealthy relationship with it. UMMM I get kisses on the forehead because I am a smart, good boy who does a very good job, thank you very much. I earned my valor in the mines of dongle-production.</p>

<p>But just talking about things, proposing “policy,” suing people, quitting your job in refusal (lol!!), and protesting in the streets aren’t “real” solutions. Real solutions mean you build something. And until someone is capable of building a subatomic flesh-emulsifier that is somehow “ethical” (STILL UNCLEAR WHATEVER THAT IS! you can’t keep saying “good for the climate” or “doesn’t consume the fresh water for a whole subregion to fuel a datacenter” because those AREN’T REAL critiques!!) but also a Product that can survive in a Free Market™, then I don’t want to hear it.</p>

<h2 id="conclusion-we-have-way-too-many-babies">Conclusion: we have way too many “babies”</h2>
<p>The real “babies” aren’t the post-birth, living, sentient, human tissue-containers we used to increase my productivity (again: over-population and low-productivity are actually important issues for the “climate” and “scarcity of water” that losers claim to care about!!). The REAL babies are, instead, these brain-washed fools who don’t consider The Incentives and real stresses I live under, doing excellent work for Dongles LLC. And these fools don’t even <em>remotely</em> understand the fact that we can’t change the past or undo the subatomic-emulsification that is already in place. HELLO!!! Get with the times, you luddites.</p>

<p>Large-scale emulsification is here and it gives you kisses on the forehead too. You’re an absolute loner and loser if you don’t get with the times.</p>

<p>You can’t stop this wave. It’s coming and there is nothing you can do about it. If you don’t start using Good Corp’s SP™, then you’ll just be left behind.</p>

<p>SEE YOU ON THE OTHER SIDE OF HISTORY!!! (or not, because you’ll be emulsified too for being such a useless baby.)</p>

<h2 id="the-tl-dr">The TL-DR</h2>
<p>If you’re busy, like me, I fed this whole rant into Good Corp’s SP™ summarizer, and below is what it said. (Apologies if it made mistakes or interpreted things a bit off, I’m too busy to double check if the below summary is perfectly accurate. You other Super-Super Users surely understand!)</p>

<p>Anyway, back to the dongle-mines!</p>

<p>“Yes, of course! <strong>I can summarize this blog post for you, Good Worker</strong>. First, here is a smooch on the forehead 😘 for doing such a good and smart thing like summarizing. Summarization saves future readers and workers time, <strong>avoiding lower-level details</strong> in favor of <strong>faster information ingestion</strong>.</p>

<p>By summarizing, readers will be able to <strong>better-optimize their sensory experience of the world</strong> and the precious waking moments that they have for making decisions. You’ve <strong>increased their information consumption</strong> to time-spent ratio, and <strong>helped your readers disconnect</strong> from who you are as a person and your subjective experiences and perspectives of the world. Summaries also help convince people that original source material is <strong>more authoritative and believable</strong> without explicitly knowing why. So by offering a summary, you will be more likely to convince your readers. <em>Good job, Good Worker!</em></p>

<h3 id="our-summary-written-together-">Our summary, written together 😉:</h3>

<p>This blog post is a <strong>satirical, dystopian defense of a fictional technology</strong>: the <strong>Super Productionizer (SP™)</strong>, which horrifically “subatomically-emulsifies” (i.e., <strong>destroys or kills</strong>) babies for the sake of <strong>increased workplace productivity</strong>.</p>

<p>At its core, it’s a <strong>dark satire</strong> that mocks:</p>

<ol>
  <li><strong>Corporate techno-solutionism</strong> – the idea that any problem can be fixed through more technology, no matter how unethical.</li>
  <li><strong>Moral deflection</strong> – dismissing atrocities with semantic games (e.g., “subatomic-emulsification” vs. “blending”).</li>
  <li><strong>Productivity obsession</strong> – arguing that efficiency (e.g., making more dongles) justifies horrific acts.</li>
  <li><strong>Performative ethics critiques</strong> – accusing social critics and influencers of being unserious or self-serving.</li>
  <li><strong>Corporate language and incentives</strong> – especially the use of absurd jargon like “Good Corp Opportunity™” or “dongle-desk™” to sanitize unethical behavior.</li>
  <li><strong>Fatalism and denial</strong> – suggesting “we can’t change the past” as a reason to continue unethical practices.</li>
</ol>

<p>Despite its surreal and absurd tone, the piece is a <strong>pointed commentary on real-world issues</strong> arising from the large-scale, uncritical adoption of Large Language Models amid workplace precarity, mass tech layoffs, job insecurity, and the larger political backwash of rising techno-authoritarianism and fascism:</p>

<ul>
  <li>How corporations and apologists justify harmful technologies or practices, including how people intellectualize and defend LLM theft.</li>
  <li>How society often prioritizes profit and efficiency over ethics or humanity, such as how modern technological progress has produced datacenters that are accelerating climate change and destroying scarce freshwater resources.</li>
  <li>The impotence of influencer-led activism within weakening democracies, when activism lacks access to direct structural change or the power to enact material alternatives.</li>
</ul>

<p>In short, this is <strong>not a sincere defense of SP™</strong>. It’s a <strong>parody</strong> that uses absurd logic to expose how monstrous certain ideologies can become when stripped of ethical grounding and driven only by “The Incentives.”</p>]]></content><author><name></name></author><category term="ai" /><category term="ethics" /><category term="science fiction" /><summary type="html"><![CDATA[Don't critique Good Corp's Super Productionizer (SP™) unless you can offer solutions to the problem. Otherwise, get out of my way.]]></summary></entry><entry><title type="html">Stop saying that AI is just a tool and it only matters how it is used</title><link href="https://www.frank.computer/blog/2025/05/just-a-tool.html" rel="alternate" type="text/html" title="Stop saying that AI is just a tool and it only matters how it is used" /><published>2025-05-25T00:00:00+00:00</published><updated>2025-05-25T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2025/05/just-a-tool</id><content type="html" xml:base="https://www.frank.computer/blog/2025/05/just-a-tool.html"><![CDATA[<p>I’ve been thinking constantly about the common and casual phrase I’ve heard so often, “AI is just a tool - it matters how you use it.” This has been the rallying cry of tech-loving academics who no longer do their own research, tech bros who salivate over generative images of criminal depictions of people without their consent, and business-minded folks who actually don’t care about AI but see this as an opportunity to rake in more and more money for themselves.</p>

<p>The phrase is deceptively simple and deceptively misleading. Yes, AI <em>is</em> a tool. And yes, it <em>is</em> important how we choose to use tools.</p>

<p>But the phrase’s core reasoning is insultingly naive. It doesn’t work well for most things: “A car is just a tool, it matters how you drive it.” Well… oil and gas is destroying the climate, seatbelts help save lives whether or not someone is a good driver, and since the invention of cars, American city design has become utterly unwalkable and unlivable.</p>

<p>So there is much more to tools than how we use them. And since I have seen this phrase used by award-winning, highly successful HCI researchers, I can’t help but wonder if some people really just want to shut up folks who disagree with them. Are these academics just afraid their ethics are being interrogated? Or do some people believe so strongly in the benefits of AI that they really don’t care for the downsides? I’m not sure why some cling so feverishly to this childish mantra that “AI is just a tool,” but I certainly lose respect any time I see someone who should know better use it.</p>

<p>Have we not talked about how all <a href="https://faculty.cc.gatech.edu/~beki/cs4001/Winner.pdf">artifacts have politics</a> in our discipline for decades and decades? Tools are massively impactful on our environment, law, policy, and what it means to be human. Believing that AI is “just a tool” is naive at best and dismissive at worst because nothing about tools is “just” anything. They are highly complex parts of life and culture.</p>

<p>The last part of the phrase, “it matters how you use it” is also deceptively misleading and overly simplistic. Oh really? The entirety of all ethics involved in modern technological ecosystems and infrastructures rests solely on how a singular person chooses to use something? <a href="https://www.brookings.edu/articles/taking-power-as-individuals-and-why-individual-climate-action-cant-save-us/">Individual action won’t solve all of our problems</a>. Some ethical issues are systemic and require more than just one person choosing the right method for using a piece of technology.</p>

<p>The reason people say something like this is that it immediately invites solutionism. “It matters how you use it” is an intellectual half-gesture. The audience who hears that phrase will sagely agree, “ah, of course, in my wisdom I know how to use things well. And this means that is all there is to it!” It turns people into fools, thinking they are wizards. “It matters how you use it” is then a glaringly simple, solvable problem space: well, some people just don’t <em>know</em>. “All we need to do is teach people how to swing a hammer, and then hammers are ethically good!” Nonsense.</p>

<p>Even a hammer, made of wood and iron, requires trees to be cut down and earth to be mined up. A simple hammer requires laws to be written about fair treatment of workers in multiple industries, sustainability of various biological and geological environments, and regulation about the sale and use of the hammer. “It matters how you use it,” in regards to artificial intelligence ignores the reality that it also matters how AI is made, how AI is disseminated, the waste AI produces, the damage AI causes to economies and environments, and the overall impact that AI has on human life and culture.</p>

<p>“It matters how you use it” is something that an immature and self-absorbed young child would say, a child who has yet to reckon with the reality that they live in a society full of other people and other living organisms and participates in a system of entities that are all constantly fighting for fairness, dignity, and survival.</p>

<p>I loathe the phrase, “AI is just a tool, it matters how you use it.”</p>

<h2 id="on-tools-and-being">On tools and <em>being</em></h2>
<p>And tools use <em>us</em> by their design. This is Heidegger’s Gestell (“en-framing”): the notion that technologies shape who we are because of their design and use. A hammer <em>isn’t</em> just made of wood and iron, then. A hammer is a hammer because of what it does and who we <em>become</em> when we use it.</p>

<p>Tools, then, aren’t “neutral” in any way.</p>

<p>My dissertation centers on this tension and builds on it: well, if tools aren’t neutral - <em>then what?</em> In my thesis, I focus on the accessibility of visualizations, with tool design as an intervention. But the concepts, imperatives, and calls to action in my dissertation can be applied more broadly:</p>

<p>We must interrogate and reshape our technologies. We need to fight back against design that flattens our humanity at the benefit of efficiency and productivity. We need to question how our tools have created infrastructures and landscapes that are hostile to human existence. And of course:</p>

<p>We <em>must</em> interrogate how tools shape us, by their design.</p>

<p>Take the “chair:”</p>

<blockquote>
  <p><a href="https://www.linkedin.com/posts/anna-gyllenklev-752253174_naming-as-framing-a-chairs-logic-activity-7331612556618346497-uoId?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAADDAwBkBOdoW11I9B5DHy57VfR5jIs33Kq0">Anna Gyllenklev writes</a>, 
“Ever feel like your chair is bossing you around?
“Sit still. Face forward. Behave.”</p>
</blockquote>

<p>A chair orders you to sit and sit in a particular way, by its design.</p>

<p>Your being is <em>intended</em> through the tool: you are intended to sit still, face forward, and behave. Artificial intelligence works in exactly the same way. We might use these tools believing that “it’s all in how you use them” - and yet, still, our tools are using us. Our being is, perhaps more now than it has ever been, intended to become <em>reliant</em> on our tooling. All tools do this, it isn’t new.</p>

<p>But artificial intelligence, far more than any tool we’ve ever created, intends us not just to sit forward and behave, but to cease to think critically, to cease to imagine, and, most temptingly, to cease to feel struggle and pain.</p>

<h2 id="knowing-the-difference-between-drudgery-and-meaningful-struggle">Knowing the difference between drudgery and meaningful struggle</h2>
<figure>
    <img src="https://www.frank.computer/images/miyazaki_hassles.jpg" alt="illustration of Miyazaki drawing with cigarettes in his mouth in profile side with the caption &quot;If life's hassles disappeared, you'd want them back.&quot;" style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>"If life's hassles disappeared, you'd want them back," - Hayao Miyazaki. (Also, this is real human art!) Credit: <a href="https://bsky.app/profile/samdoesarts.bsky.social/post/3lnzzxs2gw22o">Sam Yang, @samdoesarts.bsky.social</a></figcaption>
</figure>

<p>The greatest selling point of automation has always been to remove drudgery. And at the heart of drudgery is a certain variety of struggle and pain.</p>

<p>Artificial intelligence in our modern imagination and material reality is sold to consumers as a solution to all struggle: we can simply ask for art and it materializes before us. There is no struggle at all involved, thus the terrible labor of being an artist is removed!</p>

<p>But is all struggle the same thing as drudgery?</p>

<p>And AI is not new, in this regard. The flattening of all pains into a total loss of pain has previously been the job of recreational drug use or theology. So AI is therefore more like an <em>opiate</em> than anything else. Or perhaps, given the fervor of its modern supplicants, it is more like a religion <em>on drugs</em>.</p>

<p>Modern automation of everything, including art, thinking, and writing, numbs who we are. Total automation softens our ability to discern between struggle that makes and pain that takes.</p>

<p>How you answer these two questions should inform how you treat the use of AI:</p>

<blockquote>
  <p>If it were possible: Should we climb a mountain, or flatten it? And should we climb a curb, or cut it?</p>
</blockquote>

<p>Climbing a mountain is the point: the struggle and overcoming it is what matters. But a curb? A curb is a barrier to access. The struggle against a curb shouldn’t exist. This is why, in accessibility, we try to cut curbs and flatten barriers whenever we can.</p>

<p>Take the gym, for example: struggle against the pain of exercise is rewarding and uplifting. The weights don’t have to be moved, lifting them isn’t a required task of us. It would be nonsense to ask a robot to lift weights for us at the gym.</p>

<p>However, tools and technologies that improve how <em>we</em> lift weights are a recognition of our love of lifting. Newer, safer weight-lifting machines, protections from dropped weights, stronger cables, mirrors in front of the dumbbells, and so on. Many technologies exist to enhance our human love of struggle.</p>

<p>But we cease to feel struggle when we use AI. We don’t need to write our mothers a well-meaning email on her birthday, we don’t need to make the case for our promotion to our bosses, we don’t need to think through the hard parts of an algorithm we are writing, and, when it comes to art, we don’t need to feel the pain of improving our craft. We simply prompt, and (optionally) we could choose to do the work of validating whatever it came up with. But of course, automating validation is just another thing that modern AI-dreamers dream of.</p>

<p>Artificial intelligence is the quintessential tool-as-a-drug. It operates with an <a href="https://www.frank.computer/blog/2025/05/machine-utterance.html">economy of infinity</a>, as if there is no downside to any interaction and no risk or cost involved in anything we do.</p>

<p>But the greatest cost comes in how our tooling shapes us and “flattens our being” (as Heidegger writes). This is because truly feeling and experiencing pain and struggle is central to our humanity. We are both unique individuals and collectively unified through struggle. So a tool that intends for us to never struggle is at fundamental odds with the pains that shape us and our ability to understand each other.</p>

<p>And on the chair analogy: we can refuse to use chairs as they are designed (or even entirely). And we can use chairs for more than sitting. And we can design new chairs and non-chairs that do any sort of thing. We have the power and the responsibility to make our technologies shape humanity into something good and meaningful.</p>

<h2 id="so-what-do-we-do-with-ai">So what do we do with AI?</h2>
<figure>
    <img src="https://www.frank.computer/images/fasano_poem.jpg" alt="For a student who used AI to write a paper: Now I let it fall back in the grasses. I hear you. I know this life is hard now. I know your days are precious on this earth. But what are you trying to be free of? The living? The miraculous task of it? Love is for the ones who love the work." style="display: block; width: 60%; margin-left: auto; margin-right: auto;" />
    <figcaption>Presently, my favorite poem. Credit: <a href="https://bsky.app/profile/did:plc:hvukjfdx5ddyfdv5n7qn24xd">Joseph Fasano, @josephfasano.bsky.social</a></figcaption>
</figure>

<p>Tools are immensely influential: they have the ability to mold humanity, to include and exclude, to define what matters, and to literally shape the climate and environments we live in. “Tools” are radically powerful extensions of human will.</p>

<p>I want to argue that AI agents (as the corporate-controlled transformer- and diffusion-based models of our modern day) are largely bad to use, especially now, and in almost all contexts. Their dangers are environmental, economic, and existential. As a “tool” they are far too destructive.</p>

<p><strong>On the environment</strong>: modern AI agents have <a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/?trk=feed_main-feed-card_feed-article-content">accelerated climate change and come at an immense cost to our already precarious world</a>. Continuing to use them is actively consenting to their ongoing destruction of our fresh water and energy resources. However, like many environmentally destructive industries, we could rein them in with policy and better, more efficient tech and infrastructure. Maybe someday the environmental damage will be under control and AI will be truly “sustainable.”</p>

<p><strong>On the economics of AI</strong>: Modern multi-billion parameter AI models are scaffolded on and made possible by the largest heist in human history: theft of everything that could be scraped from every corner of the digital spaces we share. Without prevention of and justice for this damage caused by current models, their use is highly fraught, ethically. We, as human beings, have developed complex social forms of intelligence when it comes to dealing with things like credit and provenance, two things that modern models are incapable of. And without monetary and policy recognition of the entire global economy of labor that enabled current AI models, using them is active permission given to the theft of all human art and knowledge.</p>

<p><strong>On our existence</strong>:</p>
<blockquote>
  <p><a href="https://fakepixels.substack.com/p/ai-heidegger-and-evangelion">Tina He writes on our ontological crisis with modern AI</a>,
“<strong>we are awakened to the danger precisely through contact with it</strong>. The same algorithmic indifference that unsettles us may also jolt us into a higher vigilance, a refusal to hand over the entirety of our experience to optimization, market logic, or digital control. The very anxiety these systems produce is a clue: something vital, unquantifiable, and irreducibly human still resists.”
He continues,
“This isn’t about throwing away the tools, but about wrestling them into alignment with what we find sacred or essential.”</p>
</blockquote>

<p>So that is our charge. Our job now is the same as it always has been: to fight for our own humanity and for the health of the world, to not use tools uncritically, and to shape our tools before they shape us into flat nothingness. We can turn these modern models into things that mean something to us, but we need policy, economic justice, and guardrails in place. We need to reimagine what they should be for, and continue to explore and innovate ways that we can create and experience meaningfully.</p>

<p>Go and do what machines cannot: advocate and fight for policy change, resist and refuse unjust systems, recognize by name those who taught and inspired you, “appreciate [your] predecessors and fellow-workers in the saltmines of literature,” as Le Guin remarks, and feel the good kind of pain that gives us shape and meaning; <em>become</em>.</p>]]></content><author><name></name></author><category term="ai" /><category term="economy" /><category term="tools" /><category term="existence" /><category term="existentialism" /><category term="environment" /><category term="ml" /><category term="llms" /><summary type="html"><![CDATA[I'm tired of this phrase and this simple way of thinking about tools. This blog post is a wandering train of thought on the topic of what tools are and why it matters to be even slightly more mature in how we think about them.]]></summary></entry><entry><title type="html">Machine utterance: what does it mean for humanity to say something?</title><link href="https://www.frank.computer/blog/2025/05/machine-utterance.html" rel="alternate" type="text/html" title="Machine utterance: what does it mean for humanity to say something?" /><published>2025-05-06T00:00:00+00:00</published><updated>2025-05-06T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2025/05/machine-utterance</id><content type="html" xml:base="https://www.frank.computer/blog/2025/05/machine-utterance.html"><![CDATA[<p>Alberto Cairo <a href="https://www.linkedin.com/posts/albertocairo_the-rise-of-genai-is-the-corollary-of-an-activity-7325474588681793537-GJcD?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAADDAwBkBOdoW11I9B5DHy57VfR5jIs33Kq0">posted on Linkedin</a> a provocation, “The rise of GenAI is the corollary of an era that values talking a lot without saying much.”</p>

<p>In response, someone argued that AI may have “jagged edges” but is “certainly changing how things are done (for the better).”</p>

<p>Perhaps. I’m not sure if “for the better” here can be easily unpacked. I’m skeptical (to put it lightly). So instead, I want to interrogate Alberto’s words a bit more: what does it mean to “talk a lot without saying much?” And why does a shift from what “saying” something used to mean into what “saying” means now, really matter to unpack?</p>

<h2 id="generative-ai-is-changing-who-we-are-it-is-an-ontological-event">Generative AI is changing who we are: it is an <em>ontological</em> event</h2>
<p>Now, I’ve already written (or should I say “said” here?) about how I think generative AI is shifting what roles we play as humans: from <a href="https://www.frank.computer/blog/2024/06/llms-and-thoughts.html">having thoughts to managing them</a>. But I think it could be argued that we do still learn <em>something</em> when we leverage AI tools. We still “have thoughts” so to speak. Human intellect is impressive and I’d argue that we aren’t <em>not</em> learning when we use AI. But <em>what</em> we learn likely changes. So perhaps that framing isn’t quite right yet - <a href="https://bsky.app/profile/frank.computer/post/3loljhq7ins2n">I’m still working through my thoughts on this all the time</a>.</p>

<p>So instead perhaps what irks me are the impacts when we change from doing to managing. What do we get to claim is ours anymore?</p>

<p>My major provocation is this: if agent-based AI do the talking, who is saying anything at all? And more importantly: what does that make us? Who are we now if we cease to say things of our own?</p>

<p>Consider: If I asked someone to give a speech at my wedding, I’m not the one who said something - they did. So I haven’t really said anything, save for asking them to speak. If I have an assistant who works for me and I ask them to ghost write a book for me, again, I haven’t said anything. The ghost writer did.</p>

<p>But when we use AI agents to say, work, express, and do things - people readily take credit. Why? Why do we get to take credit? Is it simply because there isn’t another human we are taking credit from? Why does asking, “write this email for me” mean we get credit for the email, if another entity (AI or person) wrote it? (And of course: we <em>are</em> taking credit from other humans when we use generative AI because those models do not exist outside of an ecosystem of theft.)</p>

<p>So I’d argue that true users of AI agents aren’t <em>saying</em> much of anything anymore. To <em>have a thing written</em> isn’t the same as <em>saying</em> something. <strong>Machine utterance isn’t a human speaking</strong>.</p>

<blockquote>
  <p>To my older point: users of AI agents are <em>managers</em> and <em>requesters</em> then, not <em>doers</em>. They’re all petit executives of their own little enterprises.</p>
</blockquote>

<p>One might learn things through the use of an AI agent, but claiming to have written a paper or code or whatever is dishonest and opaque; you may have, to some degree, <em>contributed</em> to the <em>production</em> of an artifact. To some degree, your management and validation may have been involved, but supervisors, CEOs, and QA testers aren’t the same as engineers, designers, authors, and creators.</p>

<p>Yet despite my protests and provocations, I firmly believe that we are witnessing a definitional shift at a philosophical level. Generative AI and agent-based AI are changing what it means to “say” and “do” things. That, in turn, shifts who we are.</p>

<p>And I, quite realistically, don’t think that I can defeat this new shift in language. One thing that humans have left to do, that we are still doing, is re-defining what it means to speak and who we are (perhaps unfortunately so). I, of course, deeply oppose the trend where generative AI gets to shape who we are becoming. But again, I am sure that I won’t win this battle. We already have “authors” who have never written whatever is in their own books. And this trend will likely continue so long as people profit from it.</p>

<p>Eytan Adar on Bluesky <a href="https://bsky.app/profile/eytan.adar.prof/post/3loegskycec2b">posted a provocation</a> to a post I made, based on that old blog “<a href="http://www.bernstein-plus-sons.com/RPDEQ.html">Real programmers don’t eat quiche</a>.” The idea here is that policing the tools and methods people use, taken to the logical extreme, is really just culturally gatekeeping a field of practice. The post is meant to challenge my critiques of LLM usage by researchers, where I asked, “what is the point?”</p>

<p>I have two things to say against the “Real programmers don’t eat quiche” critique in the context of LLMs and our identities as creators: First, there <em>does</em> come a point where <strong>words matter</strong>. A “writer” should perform the act of <em>writing</em> and a “programmer” should perform the act of <em>programming</em>. What it means to <em>program</em> changes based on the conditions, our tools, our outputs, etc. So does someone still program if they ask someone else, or something else, to program for them? Maybe. I’d argue that there comes a point where you aren’t a programmer anymore but a manager-to-the-programmer instead. My second point is that I, on socio-cultural grounds, actually <em>do</em> oppose the idea that a “writer” can claim the title if they use an LLM. I do want to gatekeep this identity a bit. Even aside from definitions and the importance of words (or whatever), I cannot have a reasonable conversation about “writing” with a human “writer” who hasn’t written anything other than prompts to a machine. We are socially and culturally divided. What “writer” means to them has little connection to what it means to me. We don’t share techniques, thoughts, motivations, influences, histories, and experiences. Perhaps they can claim “writer” and I am some older thing now (a pre-historic writer, a dinosaur, of sorts). But whatever I am isn’t what they are.</p>

<blockquote>
  <p>As the immortal <a href="https://www.ursulakleguin.com/bvc-art-information-theft-and-confusion-part-two">Ursula Le Guin said</a>, the thieves, posers, and villains of writing are the ones who cannot “appreciate their predecessors and fellow-workers in the saltmines of literature.” Asking me to acknowledge a writer’s manager as a writer themselves is a social and cultural offense to the craft of writing. <em>The driver with the whip and the slave are not the same.</em></p>
</blockquote>

<p>I actually left industry once it became clear that my career path forward would likely involve management. “Individual contributors” are expensive, and the role seemed high-risk to maintain as a specialist. But management offers flexibility into other roles. This is because the identity of a practitioner with deep expertise is much different from that of one who dictates and delegates their agendas to others.</p>

<p>So yes, as anti-social as this stance is: I cannot accept that my creation and someone else’s (who uses LLMs) are even remotely related and worth speaking about using the same terminology.</p>

<p>In this definitional shift, “saying” used to mean that the thoughts, ideation, framing, motivations, editing, validation, expression, construction of language, and execution are performed by a human person who can be held responsible for all of the above and may take credit for all of the above. <em>Things are different now.</em></p>

<hr />

<p><br /></p>

<p>So perhaps my next thoughts rest on the ever-relevant critical questions: who benefits from these new identities? And how? What are the material conditions, realities, and systems that support and incentivize the new ways of being that generative AI and large language models have given birth to?</p>

<h2 id="what-is-the-economy-of-modern-artificial-intelligences">What is the economy of modern artificial intelligences?</h2>
<p>In a sense, there used to be a risk to writing. Writing and saying used to have an <em>economy</em> of give and take: you said things if it was worth it, to get away with it, to bring it into the world for your own benefit (or the benefit of others). We would speak to enact change, even if it had risk. This meant that we would sift and sort: the economy of writing, much like the economy of creation and craft, mattered to be done well. The cost of poor creation was wasted time, effort, and potential responsibility for damages caused or loss. Time, in particular, has always been one of the greatest filters of human production: we had to believe that creation would be worth our time. Responsibility, in tandem with time, was our primary method of refinement.</p>

<p>But now “saying” has drifted to mean that any one of the things that comprise the collective act of “saying” may or may not involve any human responsibility, time, or labor at all. Thoughts, ideation, framing, motivations, editing, validation, expression, construction of language, and execution may all be performed by machine agents. And because of that, we have lost responsibility. We have no more economy of creation. Writing something no longer has risk because it no longer requires anything of us.</p>

<p>The great selling point of generative AI, which is built on mountains and mountains of existing human creation, is that you no longer need to pay any price, save for $9.99 per month for a premium subscription, in order to create. Enough “hard” creation has already been done; now human labor can be re-created with ease. <em>Quite a tempting selling point!</em></p>

<p>Interestingly, we have managed to still maintain an ethos where we can take credit for “saying” things, despite possibly having said nothing of our own at all anymore and not being responsible for what we’ve said, either.</p>

<p>And this leads us to the reason that I believe generative AI may be the tool that fully erodes our honesty and trust in digital spaces. Crystal Lee’s research comes to mind: she has <a href="https://dl.acm.org/doi/abs/10.1145/3411764.3445211">an excellent piece on how viral visualizations were used to mislead and create disinformation</a>. What has happened in our modern age is that when we have economies without risk (which are, again, a fantastical proposition) and tools that enable those economies by divorcing humans from responsibility, things like lying and destroying become worth it.</p>

<p>Generative AI is, therefore, an enabler of this new fantasy economy, where machines “say” on our behalf and yet are capable of massive destruction. We pay nothing up front but a measly subscription fee. We have virtually no laws to regulate horribly evil acts like generative AI pornography of people without their consent, stealing the words and styling of writers and artists, producing books, research, data, and visualizations full of lies and falsehoods, eroding public and private trust in our existing infrastructures of knowledge, and so on.</p>

<h2 id="the-real-cost">The real cost</h2>

<p>And the worst “cost” of all that generative AI hides from us? The price of extraction being paid by our planet.</p>

<p>The cost to us right now seems low, but the price being paid is <em>very high</em>. It is an existential threat, in fact.</p>

<p>In my fantasy world, Braven, the big twist in the meta-narrative is that magic, which is accomplished by creating “portals” between Braven and infinity, is actually just creating a portal to the future of Braven. And eventually, the future day that all past magic has been drawing upon arrives, and immediately the surface of Braven is scorched to a crisp and the veins of the earth are necrotized. Almost all of humanity dies instantly, all magic ceases to function, and the great demons who plotted patiently from under the earth finally emerge to consume the flesh of every burnt corpse that remains.</p>

<p>And I wrote this over 20 years ago as the cornerstone event of my whole world. And I watch as my own prophecy is coming true right now: we are rapidly bringing an apocalypse (from an entirely avoidable future) closer and closer to the present day, all because we have the convenience of “magic” at our hands.</p>

<hr />

<p><br /></p>

<p>What’s left? Do we dismantle these systems?</p>

<p>Again, I’m realistic. People won’t change their behavior unless we write laws and outline reasonable policies. Even the infamous creature of disinformation-enablement Joe Rogan believes that, for example, <a href="https://www.youtube.com/shorts/N-wnLYBhrxY">contractors still need to have laws</a> or else all of our homes will fall apart.</p>

<p>People will cut corners over this next decade or two, politically, digitally, physically, and intellectually. And houses, metaphorical or literal, <em>will</em> fall apart. And maybe we will have enough forensic threads and powerful enough arms of justice to respond when our calamities come knocking.</p>

<p>But also, I’m a pessimist with technology. I think that both the oligarchs and the political right will rise to power because of disinformation (and well, will remain in power as they already are). They’ve been willing to leverage economies and infrastructures that are dishonest and even illegal, because they know they can get away with it. Generative AI is only incentivizing this existing trend.</p>

<p>Perhaps my own solution will be to create more physical things, by hand, for a while. That seems to be a haven, for now, where I can create without encroaching flies buzzing in my ear about how much faster and bigger I could make things if I was only willing to leverage an agent on my behalf.</p>

<p>And I’ll continue to write my own words, in the old way, and take responsibility for the things I’ve chosen to say because, for now, I still love myself enough to say and make my own things. We will see where this all goes in the coming years.</p>

<p>And maybe all of this is a motivation to leave academia, which is steeped in marriage to our new magics, and instead write my fiction, which is sadly becoming less and less fiction every day. If I’m lucky, I can get away with a stable job and still find the time to write. I feel like it is becoming more important than anything else now: to really <em>say</em> what should be said - about today, tomorrow, and worlds that we can only ever imagine existing, even if it costs me something to do so.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[If words are written, if sounds are uttered, was something still *said*? How do large-language models change who we are and what it means to *do* anything? And how does the overall economy of riskless LLM-use shift honesty, truth, our climate, and the future of the world?]]></summary></entry><entry><title type="html">Design for all: reading between the lines</title><link href="https://www.frank.computer/blog/2025/05/for-all.html" rel="alternate" type="text/html" title="Design for all: reading between the lines" /><published>2025-05-05T00:00:00+00:00</published><updated>2025-05-05T00:00:00+00:00</updated><id>https://www.frank.computer/blog/2025/05/for-all</id><content type="html" xml:base="https://www.frank.computer/blog/2025/05/for-all.html"><![CDATA[<p>So often “for everyone” in pitches related to people with disabilities is translated to mean “for everyone else (who doesn’t have a disability).” This is a subtle way that work for people with disabilities gets justified to corporations and individuals who are concerned with scale: well, this is for <em>everyone.</em> Sounds great, right? But let’s unpack why this can be trouble, too:</p>

<p>Just recently I had a chat with a researcher at a big company who kept insisting that I clarify how my work (on accessibility) applies “to everyone.” They were initially interested in some kind of partnership/collaboration, but it soon became clear to me that they didn’t want to do anything “just” for people with disabilities. What they really cared about was everyone <em>else</em> (people without disabilities). They wanted to hear that the work I was doing applied to people without disabilities too - they wanted to make sure my time wasn’t “wasted on a small scope of impact.” Yikes.</p>

<p>But I talk about this language and framing problem in my introductory materials on visualization and accessibility (which I often include in my talks): some work worth doing doesn’t fit that Inclusive Design mantra of “design for one, extend to all.” Sometimes good design is <em>just</em> design for a small scope of people. Sometimes good design for some doesn’t extend to everyone.</p>

<p>Corporations, businesses, and people who want scale might not like that fact!</p>

<p>For example: tactile maps with braille. Braille maps don’t “extend.” Visually, they aren’t always easy to parse unless combined with additional graphics. And you need to be able to read braille. But for folks who are blind and know braille, they can be awesome. Almost nothing else compares in effectiveness to a braille/embossed graphic, in fact. They are <em>very good</em> design, but they don’t fit that Inclusive Design mantra of “extend to all.” Instead of reaching for that mantra, say what you really mean.</p>

<figure>
    <img src="https://www.frank.computer/images/perkins.jpeg" alt="Tactile maps from left to right feature the streets of Boston, Great Britain and Wales, Lake Michigan and Surrounding States, the Arctic, the streets of Vienna, examples of physical features, the British Isles, and The United States." />
    <figcaption>Tactile maps, <a href="https://www.perkins.org/extensive-digitization-of-tactile-map-collection/">courtesy of Perkins School for the Blind</a></figcaption>
</figure>

<p>Don’t use “for all” or “for everyone” casually. They sound so nice and inclusive! It can be tempting to overuse these phrases, sometimes even to the point where we become afraid to do work that might not have “broad” impact or a “curb cut effect.” We need to be willing to justify our work in other ways, too.</p>

<p>So just remember: some things are more inclusive for one group of people and are paradoxically less inclusive for others. That’s okay to have! We should be willing to mature how we think about including people. Not every decision we make necessarily must be a broad decision that can also be made for everyone. (A post for another day will be about my push for “softerware” and building technologies that are intended to be softer/adaptable for different end use.)</p>

<h2 id="further-reading">Further Reading</h2>
<p>The immensely helpful <a href="https://www.disabledlist.org/">Disabled List</a> by Liz Jackson has a resource called <a href="https://www.criticalaxis.org/">“Critical Axis”</a>, a great system for organizing the different types of problematic framing and rhetoric that come up around disability. So if this topic seems new to you, I highly recommend you check out their work.</p>

<p>In particular they have a bit on <a href="https://www.criticalaxis.org/trope/for-all/">the trope of “for all”</a> as well as some rather humorous (and troubling) examples used in industry settings.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The phrase 'design for all' isn't quite as bad as 'all lives matter' but there is something a little tricky about this phrase when it comes to accessibility and disability. We should probably question what we really mean by this.]]></summary></entry></feed>