Mmojo.net

Human-first Generative AI.


Author: Brad Hutchings

  • Engineered Software vs. Flat-Pack Code

    Engineered Software vs. Flat-Pack Code

    #MeWriting My favorite YouTube channels are about woodworking and construction. A few of my favorites in the genre are Bourbon Moth Woodworking, ENCurtis, Make Something, Shop Nation, 731 Woodworks, and Stud Pack. I’ve always been handy with tools and ambitious with hobbies. The ongoing appeal of these channels for me is the storytelling. They are entertaining!

    In November 2021, inspired by YouTube channels Brad Angove and Texas Toast Guitars (TTG), I participated in a one-week guitar-building workshop at TTG. I built and painted this:

    The Silly Mo.

    It took 3rd place in TTG’s prestigious Great American Guitar Build-Off in June 2022.

    The guitar below was built by Brad Angove. I refined his work with days of detailed neck sanding and a perfect set-up for playability.

    Not me playing Brad’s axe.

    It took 4th place in the same contest.

    I’ve had an opportunity to build out a nice garage woodshop with a drill press, jointer, planer, benches, hand tools, and a commercial-quality CNC. With those great tools, my pinnacle creations were small cutting boards with clever epoxy decoration. My CNC-assisted guitar designs never fully materialized.

    Another thing I’ve done in my now extensive adult lifetime is assemble a metric f@$% tonne (MFT) of IKEA and off-brand flat-pack furniture. In my 20s, it was affordable. In my 30s, it was functional. In my 40s, it was still everywhere.

    Flat-pack TV stand.

    I’ve been privileged to acquire (and eventually pass on) some nice real wood furniture constructed in classic styles with classic methods. I like nice things. I would love to get back into woodworking and making nice things with modern tools and methods. Classic furniture is nice things. Flat-pack, not so nice.

    Side note on flat pack: If you want to make fun of my version of obsessive compulsive disorder (OCD), show me your badly assembled flat-pack furniture. It hurts my heart that (a) you did that, and (b) you tolerate it. And yeah, I feel compelled to fix your mess. It will cost you a burger and a Dr. Pepper.


    Plot twist! This article isn’t about classically constructed furniture versus flat-pack furniture. It’s about software engineering versus vibe coding. Software engineering is about creating classic software from required, durable raw materials. Vibe coding is about assembling software from pieces that have already been written and gathered by a large language model (LLM) in the cloud.

    A competent, accomplished software engineer can itemize reasons to be cautious with vibe coding. A vibe coder can claim he’s doing software engineering for 1/10 the cost. Managers and marketers equating these two activities are not serious about software.

    Speaking of clowns… Dario Amodei, CEO of Anthropic, stated for the third or fourth time at Davos this week that AI would be better than all humans at coding in the next 6 to 12 months. Here’s a link. I’m not embedding the video or finding the exact clip because it’s just dumb. If there is ever a Nuremberg-style trial for what these clowns have done to our industry, I will gladly bring a (an?) MFT of rope. Smdh.


    I’m not trying to start a war. I brought the fine woodworking vs. flat-pack metaphor into the discussion as an analogue to software development versus assembly to propose a framework for peace.

    Vibe coders aren’t writing software that requires five decades of academic research and a similar span of commercial practice — with gross missteps along the way — to work reliably. They’re assembling software that might not have to work well. I’ve previously described this as “building prototypes”. I am 100% supportive of that activity and any methods to do it so long as people are willing to call them prototypes and throw them away when real systems need to be built!

    We can have a world where there is software built by traditional, time tested methods — and where there is flat-pack vibe code built by AI, allegedly under human guidance. The peace deal is that you, the manager or customer, pick the one you want and choose practitioners who want to make it for you. Traditional software engineers don’t want to assemble flat-pack. Vibe coders can’t write real software. That’s just how things are.

    Can’t we all just get along?


    I floated a draft of this to my small collection of reliable critics. One response came back quickly: “Tech people don’t want to read your boring woodworking metaphor or hear about what you do when you’re not coding.” Exactly. Thank you for validating my premise!


    I would appreciate your reactions and comments on my LinkedIn repost.

  • Inference @ Home

    Inference @ Home

    #MeWriting Americans will soon face a choice: Get off the electric grid or do their inference at home. Let me define and explain.

    The All-In Podcast for Friday, January 16, 2026 floated the first option. In order to free up electricity for data centers, homeowners in the United States would install solar panels and batteries over the next decade. The All-In option would cost each detached homeowner about $30K for a company to add solar and battery to the home’s roof. Perhaps we’ll see do-it-yourself or handyman kits appear at retailers like Lowe’s and Home Depot in the $10K range.

    Please watch the full segment — about 17 minutes. Fans of the podcast and the “besties” will appreciate this for what it is: a trial balloon from investors, industry, and government. All four were pitching the approach, ignoring obvious pitfalls:

    1. Multi-family and high-density buildings. Two-thirds of United States households live in detached homes, so the remaining one-third will face the coordination problem of who installs solar and batteries.
    2. Suboptimal rooflines. Call south facing roofs the 100% efficiency baseline. East/west facing roofs operate from 75%-90%. North facing roofs operate from 45%-70%. Solar efficiency was not a consideration in building orientation for most existing homes.
    3. Maintenance and repair. When local equipment or transmission wire needs to be replaced, the replacement cost is spread over many rate-payers. When the panel or battery on your home goes bad, the replacement cost is spread over you, or a warranty. There’s likely no routing around the problem to keep service available, unless you’ve invested a multiple of base system cost for local redundancy.
    4. Regulation. Most states do not currently have regulation favorable to single family homeowners buying from and selling to the grid. Many homeowners who would have been able to afford to buy systems outright have ended up in more expensive leasing situations solely for regulatory compliance associated with not being completely off-grid. With telephone deregulation in the early 1980s, we solved this problem by letting customers plug whatever compliant equipment they wanted into the phone network. We are almost 50 years behind solving this basic problem with the electric grid. Few people are discussing it.

    What Problem are We Solving?

    Let’s stop a moment and appreciate that there is a problem that needs to be solved. The artificial intelligence segment of the tech industry wants to build out inference capacity in new data centers. Inference capacity is the ability to ask “AI” questions and get responses. This is usually in the form of chat or so-called “agentic” workflows. The bigger the models and the more users they serve, the more compute (CPU or GPU), memory (RAM and disk), and power needed to provide the service. Let’s leave out diffusion, which is used for images, sounds, and video. Let’s also set aside network bandwidth concerns. For text inference, it’s negligible.

    The states have already entered the discussion on resource allocation. Florida, led by its staunchly conservative Governor, Ron DeSantis, is saying no to land use, environmental impacts, and grid prioritization for new AI data centers. Politically, this should surprise everybody and, simultaneously, surprise nobody. DeSantis is specifically questioning the need for centralized inference, even pooh-poohing the “don’t let the Chinese beat us” narrative driving data center buildout.

    The recently departed Scott Adams was both the creator of the Dilbert comic strip and a latter-day popularizer of the persuasion lens. Through that lens, we can see that dramatically boosting capacity or radically reallocating usage of the electric grid is an example of selling past the close. The real sale is inference capacity scaled beyond imagination. We are not talking about whether that is needed. Spoiler alert: it’s not. We are talking, instead, about how to provide enough power to do it.


    Are We Solving the Right Problem?

    I told you that inference scaled out by data centers is not needed. For two years now, I have helped my clients and customers use small large language models (LLMs) running on their laptops or inexpensive appliances for chat. I coined a phrase for how these models feel: just as knowledgeable, but less annoyingly loquacious than the large, popular cloud models. They’re not quite as fast either. They generate answers a little faster than you can read them rather than spitting out a page of text in an instant.

    A common response to my message should be quite flattering to me: “Brad, stick to comedy.” I make this whimsical because it is absurd. When I’ve dug deeply into real people’s embrace of cloud chat, I have found that the illusion of intelligence is very important to them. It’s easy to believe that some giant machine in the cloud is “intelligent”. It is not easy to believe that an appliance computer the size of a deck of cards is “intelligent”. Both provide similarly useful answers for, let’s call it, 19 of 20 questions they’ll ask. But they want the illusion of intelligence provided by a far-away computer they will never see. That illusion you crave might cost you $30K in installation this next decade and a lifetime of maintenance headaches and worry. See the All-In Podcast trial balloon above.

    Inference is not just chat. My software, Mmojo Server, provides an OpenAI compatible application programming interface (API). This makes it possible for developers working on AI applications to use a private, local Mmojo Server rather than a cloud system as the AI backend to their products. One big advantage during the development phase is that they don’t pay a cloud provider for tokens. They pay for availability and capacity of a Mmojo Server. They might pay a cloud provider tens of thousands of dollars during development for what they can run for free on their laptops or package into a fast, local stand-alone server for under $2K. Developers can also eliminate a problem called “drift” — where the model changes — using a fixed, local LLM instead of the cloud. Mmojo Server has developers from companies you’ve heard of using it for both AI wrapper and agentic application development. It’s not theory or vision. It’s real.
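    To make that concrete, here is a minimal sketch of what the swap looks like from the developer’s side, using the standard OpenAI Python client pointed at a local server. The base URL, port, and model name below are placeholder assumptions for illustration, not documented Mmojo Server specifics:

        # Point the standard OpenAI client at a local, OpenAI-compatible
        # server instead of the cloud. Endpoint and model name are
        # placeholders; substitute whatever your local server reports.
        from openai import OpenAI

        client = OpenAI(
            base_url="http://localhost:8080/v1",  # hypothetical local endpoint
            api_key="unused",  # local servers typically ignore the key
        )

        response = client.chat.completions.create(
            model="local-model",  # placeholder for the model your server hosts
            messages=[{"role": "user", "content": "Summarize our meeting notes."}],
        )
        print(response.choices[0].message.content)

    The only line that changes when moving between cloud and local backends is the base_url. The rest of the application code stays the same, which is what makes the local swap cheap.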


    Alternative Approach

    I have a better idea than convincing all United States homeowners to spend $30K on solar and battery. What if, instead, homeowners spend $300 or $3000 on inference at home? The hardware is inexpensive and reliable. The software already exists. The application protocols are well defined and in use. My own system, the Mmojo Knowledge Appliance, is plug and play with zero configuration. Plug it into the wall for power and your router for connectivity. It is instantly available for use by any computer or device on your home network. Should it break, order another one and plug it in, just like any other small appliance in your home.

    Mmojo Knowledge Appliance

    I’ve built these for paying customers with inexpensive Raspberry Pi devices. If your tastes for inference tend more to race car performance, I can build you one using, for example, a Framework Desktop computer with an AMD Ryzen AI Max+ CPU/GPU.

    Side note: The appliance pictured above costs about $4 per month to run full throttle 24/7 on grid electricity priced at California regulated peak consumer rates. A typical heavy user might spend $0.50/month at those billing rates.
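    For the curious, that figure is easy to sanity-check. Here is the back-of-the-envelope arithmetic, assuming roughly an 11 W continuous draw for a Raspberry Pi-class device and a $0.50/kWh peak rate. Both numbers are illustrative assumptions, not meter readings:

        # Rough monthly cost of running a small appliance 24/7.
        watts = 11            # assumed continuous draw, Raspberry Pi-class device
        rate_per_kwh = 0.50   # assumed California peak consumer rate, $/kWh

        kwh_per_month = watts * 24 * 30 / 1000          # about 7.9 kWh
        cost_per_month = kwh_per_month * rate_per_kwh   # about $3.96
        print(f"{kwh_per_month:.1f} kWh/month, ${cost_per_month:.2f}/month")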

    Over the past two decades, the biggest reason that software has moved to the cloud is monetization. Tech companies can put a meter on your usage of software and force you to pay. There are secondary “benefits” like no installation or required maintenance and upgrades by technically challenged users. Presented with the costs of going “all-in” on cloud inference, maybe we should reconsider the appropriateness of that model.

    I have a working name for this approach: Inference @ Home. If this approach interests you, please message me on LinkedIn or drop me an email. I have several ways you can participate in this mission, ranging from using the Mmojo Server software to sponsoring my work. Let’s talk! -Brad


    I would appreciate your reactions and comments on my LinkedIn repost.

  • Knowing What AI is Good For is a Super Power

    Knowing What AI is Good For is a Super Power

    #MeWriting I stumbled on a LinkedIn post from a connection, Emmanuel Maggiori (link), today. The post is short enough to quote in full here:

    There’s a large market for “good enough” work (as opposed to excellent work). This includes for example, writing SEO-driven articles or designing banner images for blogs. High quality and thoroughness don’t matter in those cases. This is the work that will be most affected by AI, as people will use AI instead of hiring humans to do it.

    This is very similar to my own view that generative AI is good for generative things and blank page filling.

    It’s easy for AI enthusiasts to dismiss what Mr. Maggiori or I are suggesting are good uses of AI as “not worth anyone’s attention” or “minor use cases”. They are worth attention and they’re not minor! Imagine the greater AI industry wants to pave over the entire United States. I think that gets the scale of their ambition right. Blank page filling use cases, comparably, want to pave over California, Oregon, Washington, and Rhode Island for good measure. These use cases are very ambitious, with plenty of good work to go around for anyone who wants to attack them! You don’t have to pretend LLMs can think to keep busy.

    Meanwhile, 95% of AI projects are failing. They’re failing because they are too ambitious. They’re pretending that “AI” is intelligent, thus ignoring the “effective” side of the coin.


    The AI Super Power works like this:

    1. You know what tasks generative AI is actually good at.
    2. You have the chill to not endorse tasks it’s not good at.

    I know plenty of people on the “critic” side who claim that AI isn’t capable of doing the things it’s good at, or who rake me over the coals for wanting to use it for tasks humans will not even do because the tasks are too low value. I can’t help you if you can’t recognize the value of processes you can replicate with 100% success at home.

    I have a very good friend of almost 40 years. He will probably end up reading this some time. Two years ago, when I was trying to explain what AI is good for, I came up with a one-word description: “delight”. It’s consistent with what I’ve settled on. My friend then started coming to me with tasks that AI is not good for, claiming they would be delightful to him.

    I won’t call that intellectually dishonest or purposely ignoring my point, because I’m kind. I will call it not having the requisite chill. This wasn’t two old friends bantering over beers trying to solve the world’s problems. We were trying to find a good business idea for AI. As of now, we still try. If we ever settled on an idea that didn’t meet the chill requirement, I would waste a lot of time and he would lose a lot of money. Funny enough, it would also be my fault when we failed.

    I do not know a lot of people who actually have this super power. Prior to reading and commenting on Mr. Maggiori’s post, then writing this article, I hadn’t framed it this directly. I have felt like I was on Brad Island with a strange set of beliefs that are difficult to share with, let alone inculcate into others. I know that it takes around 6 months with non-technical folks I engage deeply with on AI topics to move them into the vicinity of Brad Island. People’s natural priors of trust in technology and faith in “artificial intelligence” are hard eggs to crack.

    If you accept what I claim about this being a super power, I can help you and your company. I don’t just arrive with a vision. I have actual software that will help you understand AI and develop that same super power. Message me on LinkedIn if you’re interested.


    I would appreciate your reactions and comments on my LinkedIn repost.

  • Efficiency is One Side of the Coin

    Efficiency is One Side of the Coin

    #MeWriting Efficiency is the overriding goal for humans producing content today. AI is the perfect tool to drive it:

    1. You, a human, provide an input to the AI.

    2. The AI generates an output.

    3. You paste that output into place.

    4. You are efficient.

    Notice that you, a human, are not only in the loop. You are in control of the loop. But most importantly, you are efficient, because you let the AI do the bulk of the work. Presumably, Step 2 will become less expensive and take even less time in the future, as hardware, AI, and task strategies improve.

    AI developers can make you even more efficient by automating Step 1 and Step 3, as in the sketch below. You’re still in control though. Nobody who doesn’t embrace AI can match your output divided by time spent, a.k.a. efficiency. You have quickly mastered the efficiency side of the coin.
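    In case it helps to see the loop as code, here is a minimal sketch with Steps 1 and 3 automated. The endpoint, model name, and file names are illustrative assumptions, not anyone’s real pipeline:

        # The four-step efficiency loop, with Steps 1 and 3 automated.
        from pathlib import Path
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

        prompt = Path("input.txt").read_text()        # Step 1: provide an input
        result = client.chat.completions.create(      # Step 2: the AI generates
            model="local-model",                      # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        output = result.choices[0].message.content
        Path("output.txt").write_text(output)         # Step 3: paste into place
        # Step 4: you are efficient.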


    Wait, there is another side? Yes. The content your efficient process produces will be used by someone. That use is the Effectiveness Side of the coin.

    To get your content considered on the Effectiveness Side, it has to grab someone’s attention. And then, to get it used as you intended, it has to be interesting. Notice that you are not optimizing for these on the Efficiency Side.

    Effectiveness is currently a second-class citizen to efficiency for many people and within many organizations, especially those that are all-in on AI. Effectiveness is often hard to measure, with clicks and likes being crude but available proxies. Courage to prioritize effectiveness is hard to come by, as well.

    To get actionable telemetry on the Effectiveness Side, you need to speak to your intended readers (or viewers). You need to ask them if they looked at your content and how closely they paid attention. You need to listen for signs that they were or were not engaged with it, regardless of the word content of their answers. You need to accept a qualitative measure of the Effectiveness Side, because any numerical measurement will lie to you. How do you measure what’s inside your readers’ heads?


    My recommendation to you is to prioritize effectiveness. Once you have found a good measure of effectiveness and your content meets or exceeds your goals, you can work on being efficient. Nobody cares how efficiently your content is produced if they don’t know about it to begin with.

    People tend to be surprisingly effective content creators. AI and automation tend to be surprisingly ineffective. If you are relying on AI to be effective or more effective, the AI should pass two tests: (1) It should actually be effective, and (2) it should not rely on magic to be effective. You should be able to explain, from first principles of how AI actually works, how application of AI to creation of your content will be effective.


    I’m tagging this with a new tag, “Branch Elbonian”, as a tribute to Scott Adams’ lifetime work on persuasion. I’ll explain the tag another day when I have more examples to draw from.


    I would appreciate your reactions and comments on my LinkedIn repost.

  • On the Passing of Scott Adams

    On the Passing of Scott Adams

    #MeWriting Dilbert cartoonist Scott Adams passed away this morning at the age of 68. Reading the New York Post obituary by his biographer, Joel Pollak, I learned something important about “AI”.


    I first encountered Scott in the summer of 1993. I had completed my first year of grad school studying theoretical computer science at UC Irvine, and somehow convinced my Dad and his bosses at Pacific Bell that they should hire me to create a sales tool for T1 lines and Advanced Digital Network (ADN) inspired by graph modeling. I worked in my Dad’s private office at the Bishop Ranch office park and had a work Mac IIfx and my personal PowerBook 170 at my disposal. Yeah, I could program in C++ all day long, but developed the tool in HyperCard. I got paid really well for this work, and it made a huge impact on a sales team.

    My Dad and I were walking across the parking lot one afternoon, and I noticed a light blue or silver Datsun Z with the license plate DOGBERT. I asked my Dad if he knew what that was about, and he told me there was this up-and-coming cartoonist who worked in the next pod over. I became a regular daily reader that night. It was amazing to me that Scott/Dilbert was possible, let alone tolerated. But it was also amazing to me that I could have everyone I worked with in stitches with a meeting / “meating” joke, as crude as the joke was. There was just something very wrong with office culture. Scott was the reporter on scene who made a career of getting away with it. BTW, I know exactly who visually inspired the Pointy-Haired Boss. Absolute dead ringer for the guy. Not my Dad, LOL. I’ll take that knowledge to the grave though.

    Despite working in close proximity, I never actually met Scott. It didn’t dawn on me that that would have been a better use of my time than, say, grabbing coffee with one of the sales guys who was using my tools before they were ready and swapping stories. No real work ever got done after lunch. Sorry, not sorry.


    Fast forward to 1996, when Scott released his first book, The Dilbert Principle. It was finally okay to say what had been the quiet part out loud. Big business culture had been captured by bullshit. I’d been witness to the progression watching the crap my Dad endured during the 1980s. Let’s revisit “Quality” and “Leadership Development” of that era sometime. Not now.

    At the same time people were discovering that big business was big bullshit, the Internet and entrepreneurship made it possible for a whole generation of smart kids to avoid it. Even the dot-com era startups avoided that bullshit. There was, for almost 15 years, a window of effectiveness working for or with small firms. And Dilbert was, for people at these firms, a popular reminder of how good they had it.

    While Scott continued to draw the Dilbert strip, he had pivoted to religion, philosophy, life advice, persuasion, and even politics as his book themes: God’s Debris, The Religion War, Stick to Drawing Comics, Monkey Brain!


    In 2015, Scott was the first prominent person to point out how good Donald Trump was at persuasion. Like Trump or not, he was, at that point, quite entertaining and hilarious, mostly for how the “serious people” reacted to him and how he did not care. But Scott connected the dots to persuasion and invoked Cialdini to make the case. It was quite a deep pull at the time. Here’s what I’ll say about Cialdini and persuasion… His groundbreaking book, Influence: Science and Practice, was assigned reading in an “Honors” political science breadth course I took as a Computer Science major at UC Irvine. In my discussion section for the course, composed mostly of artists, poets, and poli sci types, I was the only one who actually read the book and the only one fascinated by it. In 1991, I knew it was special. I’ve recommended it to every person I’ve worked closely with in my career. Turns out…

    Popularizing persuasion as both explanatory and practical will be Scott’s most important achievement. It eclipses the Dilbert cartoon. It probably even eclipses his mantra of being helpful. I think Scott appealed to helpful people. I don’t think he changed anyone’s mind or behavior on being helpful. That seems hardwired (or not) to me, with most of the error toward sycophancy rather than opposition.


    For 32-1/2 years of my adult life, I’ve been at least a weekly consumer and often a daily consumer of the content Scott produced. It’s much like how, for most of that time, I’ve been a consumer of coffee. It’s not an obsession. It’s not drop everything. It’s more a comfortable routine that never disappoints. I don’t always know what I’m going to get, but I know that sometimes, it’s going to be really damned interesting!

    For the past three years, coincident with both his “cancellation” and the popularization of LLM chatbots, Scott had more than dabbled with chatbottery. He wanted it to work and to be intelligent, and was routinely disappointed. I replied to too many of his posts — in the spirit of being helpful — but never broke through the noise. There is a line in Joel Pollak’s obituary this morning that made it all make sense to me:

    Adams used what he called the “persuasion filter”: Rather than judging whether political rhetoric was true or false, he simply evaluated it based on whether it was persuasive.

    I’ve listened to hours of Scott talking about LLMs, and he never stated explicitly that he was using the same filter for them as for politicians like Trump. He would acknowledge that they gave very confident-sounding answers. Every new tool promised to do something really amazing! Enough so that he tried many of them, and he tried many use scenarios, like narrating his books in “his” voice, etc. Right up to a couple of weeks ago, he was working on a process to be applied after he was gone.

    Speaking of his voice… Anyone remember when Scott couldn’t talk for three years, from 2005 to 2008? I had totally forgotten, and I had a much longer period when my voice would cut out randomly. Turns out I was fat, and losing 70+ pounds took care of that for me. Scott had Botox, surgeries, and brain retraining to fix his. Scott was many things, but he was never fat.

    Anyway, as I spend the next year or so remembering Scott Adams things that gently nudged my iceberg for three decades, I’ll say here that the one that might end up being the most impactful is the connection I just made between the obituary pull quote and his fascination — despite continual and predictable disappointment — with AI.

    Thank you, Scott Adams! You were more than helpful.


    I would appreciate your reactions and comments on my LinkedIn repost.

  • Everything I Needed to Know About LLMs…

    Everything I Needed to Know About LLMs…

    …I Learned in High School Geometry.

    #MeWriting People of a certain age learned the most important skill for understanding how large language models (LLMs) actually work back in high school. No, we did not learn about neural networks in auto shop. Side note: I had to endure almost a half hour of one-on-one “academic counseling” to be allowed to enroll in auto shop back in 1987. Lowbrow class, thought by a credentialed adult to be a waste of my talent. Let your favorite LLM complete that hilarious story.

    I’m talking about Geometry class. There are two intellectual skills that are generally taught in high school geometry: reasoning and construction. Reasoning is how to prove a hypothesis, one intellectually sound step at a time. In geometry, we called them “proofs”. Construction is taking a limited set of tools and operations and, one intellectually sound step at a time, inventing more complicated operations. In Geometry class, we started with:

    • A flat piece of paper that can be marked.
    • A pencil for marking.
    • A straight edge for marking straight lines.
    • A compass for drawing arcs of a set radius centered at a point on the paper.

    Proof and construction in high school geometry are intellectual exercises — training for our high school brains. In today’s world, or the world I entered as an adult, they don’t have a lot of practical application. My Dad, who was as close as anyone has been to being a professional applied mathematician, never had to trisect an arbitrary angle with only a straight edge and compass. And if he had been required to do that, he would have known that isn’t possible, and I’m confident he could prove it!

    I’d like you to take at least 10 minutes to watch some (or better, all) of this video reviewing basic constructions you probably covered in your high school geometry course.

    Does any of that seem vaguely familiar to you? The world doesn’t need you to know the specifics to function as an adult. Nor does your job, in all likelihood. But you will be a more effective adult if you know that many of the complicated systems we construct and that you use daily are built from a few very simple principles, with a simple set of rules applied. You should realize that some things can’t be done with some systems of tools and rules. This is an important concept!


    It turns out that LLMs are just like this. Here is how every LLM works:

    • Large set of numerical weights, which define relationships among sets of nearby tokens.
    • Context window — the ordered set of existing tokens you’re working with.
    • Way to choose one good-enough random next token, using the weights and context window as inputs.

    The completion algorithm simply runs that operation in the third bullet until it encounters a special “I’m done” token.
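    To make the construction concrete, here is a toy sketch of that loop in Python. The weights table is a hand-written stand-in for billions of real parameters, but the loop is the actual shape of the algorithm: choose a good-enough random next token from the weights and the context, and repeat until the special “I’m done” token appears.

        import random

        # Toy stand-in for an LLM's weights: next-token frequencies keyed
        # by the last token of the context window.
        weights = {
            "the": {"cat": 0.5, "dog": 0.4, "<END>": 0.1},
            "cat": {"sat": 0.7, "ran": 0.2, "<END>": 0.1},
            "dog": {"ran": 0.6, "sat": 0.3, "<END>": 0.1},
            "sat": {"<END>": 1.0},
            "ran": {"<END>": 1.0},
        }

        def complete(context):
            # Choose good-enough random next tokens until <END> appears.
            while True:
                candidates = weights[context[-1]]   # weights + context window in
                token = random.choices(list(candidates), list(candidates.values()))[0]
                if token == "<END>":                # the special "I'm done" token
                    return context
                context.append(token)

        print(" ".join(complete(["the"])))  # e.g. "the cat sat"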

    Everything else you think you see the LLM doing is just repeated application of these two items and one step. Here is an (old) video of me showing how chat is an illusion:

    If you didn’t watch, I showed how chat is just running the completion algorithm until it’s the user’s turn to type something.
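    For developers, the whole illusion fits in a dozen lines. A minimal sketch, assuming a local OpenAI-compatible server that exposes the plain-text completions endpoint; the URL and model name are placeholders:

        # "Chat" is one growing text transcript plus a stop string. Text
        # completion runs until the model would start the user's next turn.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
        transcript = "A helpful assistant chats with a user.\n"

        while True:
            transcript += "User: " + input("You: ") + "\nAssistant:"
            completion = client.completions.create(
                model="local-model",
                prompt=transcript,
                stop=["User:"],     # stop when it's the user's turn to type
                max_tokens=256,
            )
            reply = completion.choices[0].text
            print("Assistant:" + reply)
            transcript += reply + "\n"

    The stop string is the entire mechanism that makes it “the user’s turn”. There is no second party in there, just a transcript being extended.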


    If your team has a failed AI project underway, here is how I would lead it to success:

    1. Your team watches the original The Karate Kid (1984) movie.
    2. We will discuss “wax on, wax off”. This is how actual work gets done. More importantly, it’s how we as humans internalize the work that gets done.
    3. We will all watch and replicate the Geometry Constructions video from above. Each team member will perform all 15 constructions using straight-edge and compass. This is a one-day project. We will frame some and decorate the office.
    4. Every use of an LLM in the project will be constructed. We will find some constructions that can’t be done or don’t make sense.
    5. We will focus on the ones that work and make sense, so we can call them wins.

    I would appreciate your reactions and comments on my LinkedIn repost.

  • It’s Going to Snow!

    It’s Going to Snow!

    #MeWriting I moved to Minden, NV in mid-September. Minden is east of Lake Tahoe and down the hill at an elevation of about 5000 feet. I didn’t plan this, but circumstances sent me here and different circumstances kept me here. I was a Southern California kid originally and made South Orange County my home for 37 years after coming down from the San Francisco Bay Area for college. You can probably imagine my horror at seeing this forecast on my phone today. And you’re at least half right!

    The cold is not good to me. I get a touch of Raynaud’s Syndrome when the ambient temperature is below 65℉-ish. For me, it’s blue then white fingers above the knuckles and a search for warm running water to thaw them out. I’m quicker than most with mittens and gloves. I know people who get it much worse, so I don’t need your sympathy. Weight loss has helped out a bunch, but it’s still a day-to-day concern here, more than it was in The (South) OC.

    Let me tell you why you’re almost half wrong, though. This is the third time since getting here that I’ve seen 4 days in the next week with a snow forecast. In reality, we got an inch of snow on the ground Christmas morning, and it was gone in a couple hours. Check in with me on Friday, January 9th, and I’m sure I’ll have a similar recap.

    My parents live across town. They’ve been here almost 24 years. I already know from visiting through the years that nobody gets the weather predictions right. That’s not what I want fixed here. I really just want it to snow.

    The point of this is that there is a weird glitch in the matrix right now. I wonder if and how you have stumbled on it.


    I would appreciate your reactions and comments on my LinkedIn repost.

  • 2026: Fear of Messing Up (Video)

    2026: Fear of Messing Up (Video)

    #MeWriting In 2026, it will be a good plan to check if your AI idea is messed up. Because in 2023-2025, 95% of them turned out to be. The reason projects fail is that the use of generative algorithms is not consistent with what those algorithms do. I’ve been consistent on that for over 2 years posting here. I’ve lived it and I have felt its pain.

    If you need help (pardon me for using this word) “aligning” your business, projects, or ideas with what generative algorithms actually do, please reach out. I can get filthy rich and you can avoid a ridiculously costly mistake. It’s win-win. 95% of you clearly don’t get what these algorithms do, and I think the other 5% are lying, but that’s just a hunch. 🤣

    Music credit: me.

    I would appreciate your reactions and comments on my LinkedIn post.

  • Slop: Mis-valuing the Temporary

    Slop: Mis-valuing the Temporary

    #MeWriting One of my favorite YouTube channels is Stud Pack. It’s an ongoing story of a father, his son, and his son-in-law building a dream house for his son, and doing interesting side projects along the way.

    In recent Christmas holiday videos, they’ve been working to fix big problems in a friend’s home. The friend’s family is facing long-term financial challenges. Basic home maintenance and finishing home improvement projects got derailed at some point.

    One big task they took on was fixing a support beam in the kitchen ceiling. To do that, they had to frame two temporary 12-ish foot load-bearing walls, one on each side of the beam. These took time. These took dimensional lumber. These took planning. These were an important part of the process of fixing the beam. Here is the episode where they document this work:

    Why am I fascinated with this? Because “in the trades”, as the kids say, temporary artifacts are necessary to allow big things to take shape. In building the son’s dream house, they are continually erecting and dismantling scaffolding. If you watch carefully, you’ll find esoteric examples of temporary in everything they do.

    We rarely build temporary things with non-physical mind work. It is a labor of discipline bordering on stubbornness to get anyone to throw away a draft or a prototype. We see that approach as inefficient. Don’t get me started about removing unused features and interface clutter!


    Into our crowded archive of every thought and attempt at thinking come algorithms that can generate new expressions of thoughts and changes to expressions quickly and inexpensively. We’re not at all inclined to value what they produce as temporary, so it necessarily becomes clutter and slop. We don’t have the discipline to under-value or correctly value, so we naturally over-value.

    Put another way: We have these new generative AI tools that can generate content that will move our own thinking a few feet. We insist on maximizing the value of everything the tools generate. Instead of treating their outputs as things to consider, as temporary supports for formulating bigger ideas, as scaffolding for reaching otherwise inaccessible places, we treat them as fully formed solutions. In over-valuing them, we get slop.

    And another way: Filling a blank page doesn’t have to be permanent. It should be something we come back and replace. We should dedicate the time and attention to do so.


    I would appreciate your reactions and comments on my LinkedIn repost.