• AI for Defense

    My good friend Danny Goodman (and Co-Founder at Swarm Aero) recently wrote a great essay on how AI can help with America’s defense. He outlines 3 opportunities:

    • “Affordable mass”: Balancing/augmenting America’s historical strategy of pursuing only extremely expensive, long-lived “exquisite” assets (e.g. F-35s, aircraft carriers) with autonomous, lower-cost units which can safely increase sensor capability and, if it comes to it, serve as alternative targets to help safeguard human operators
    • Smarter war planning: Leveraging modeling & simulation to devise better tactics and strategies (think DeepMind’s AlphaStar on steroids)
    • Smarter procurement: Using AI to evaluate how programs and budget line items will actually impact America’s defensive capabilities to provide objectivity in budgeting

  • Why Childcare *is* Economic Infrastructure

    As a parent myself, few things throw off my work day as much as a wrench in my childcare — like a kid getting sick and needing to come home, or a school/childcare center being closed for the day. The time required to change plans while balancing work, the desire to check in on your child throughout the work day to make sure they’re doing okay… and this is as someone with a fair amount of work flexibility, a spouse who also has flexibility, and nearby family who can pitch in.

    Childcare, while expensive, is a vital piece of the infrastructure that makes my and my spouse’s careers possible — and hence the (hopefully positive 😇) economic impact we have possible. It’s made me very sympathetic to the notion that we need to take childcare policy much more seriously — something that I think played out for millions of households when COVID disrupted schooling and childcare plans.

    The Washington Post’s Catherine Rampell also lays this out clearly in an Opinion piece, tracking how the closure of one Wisconsin day-care had cascading impacts on the affected parents and then their employers.

  • Good Windows on ARM at last?

    Silicon nerd 🤓 that I am, I have gone through multiple cycles of excited-then-disappointed for Windows-on-ARM, especially considering the success of ChromeOS with ARM, the Apple M1/M2 (Apple’s own ARM silicon which now powers its laptops), and AWS Graviton (Amazon’s own ARM chip for its cloud computing services).

    I may just be setting myself up for disappointment here, but these (admittedly vendor-provided) specs for Qualcomm’s new Snapdragon X (based on technology Qualcomm acquired from Nuvia and is currently being sued over by ARM) look very impressive. Biased as they may be, the fact that these chips perform in the same range as Intel/AMD/Apple silicon on single-threaded benchmarks (not to mention on multi-threaded workloads, which take advantage of the Snapdragon X’s 12 cores) hopefully bodes well for the state of CPU competition in the PC market!

    Single-threaded CPU performance (Config A is a high performance tuned offering, Config B is a “thin & light” configuration)
    Multi-threaded CPU performance (Config A is a high performance tuned offering, Config B is a “thin & light” configuration)

    Qualcomm Snapdragon X Elite Performance Preview: A First Look at What’s to Come
    Ryan Smith | Anandtech

  • Complex operations for gene editing therapies

    Gene editing enables new therapies and actual cures (not just treatments) that were previously out of reach. But one thing that doesn’t get discussed a great deal is how these new gene editing-based therapies throw the “take two and call me in the morning” model out the window.

    This piece in BioPharmaDive gives a fascinating look at all the steps involved in a gene editing therapy for sickle cell disease for which Vertex Pharmaceuticals is awaiting FDA approval. The steps include:

    • referral by a hematologist (not to mention insurance approval!)
    • collection of cells (probably via bone marrow extraction)
    • (partial) myeloablation of the patient
    • shipping of the cells to a manufacturing facility
    • gene editing of the cells at the manufacturing facility
    • shipping of the cells back
    • infusion of the gene-edited cells into the patient (so they hopefully engraft back in their bone marrow)

    Each step is complicated and has its own set of risks. And, while many economic aspects of this are similar to more traditional drug regimens (high price points, deep biological understanding of disease, complicated manufacturing [esp. for biologics], medical / insurance outreach, patient education, etc.), gene editing-based therapies (a category which also includes CAR-T therapy) require a level of ongoing operational complexity that the biotech/pharmaceutical industries will need to adapt to if we want to bring these therapies to more people.

  • The World Runs on Excel… and its Mistakes

    The 2022 CHIPS and Science Act earmarked hundreds of billions in subsidies and tax credits to bolster a U.S. domestic semiconductor (and especially semiconductor manufacturing) industry. If it works, it will dramatically reposition the U.S. in the global semiconductor value chain (especially relative to China).

    With such large amounts of taxpayer money practically “gifted” to large (already very profitable) corporations like Intel, the U.S. taxpayer can reasonably expect these funds to be allocated carefully and thoughtfully, with processes in place to make sure every penny furthers the U.S.’s strategic goals.

    But, when the world’s financial decisions are powered by Excel spreadsheets, even the best laid plans can go awry.

    The team behind the startup Rowsie created a large language model (LLM)-powered tool that can understand Excel spreadsheets and answer questions posed about them. They downloaded a spreadsheet the US government provided as an example of the information and calculations applicants must fill out in order to qualify, then applied their AI tool to understand its structure and formulas.

    Interestingly, Rowsie was able to find a single-cell spreadsheet error (see images below) which resulted in a $178 million understatement of interest payments!

    The Assumptions Processing tab in the Example Pre-App-Simple-Financial-Model spreadsheet from the CHIPS Act funding application website. Notice row 50. Despite the section being about Subordinated Debt (see Cell B50), they’re using cell C51 from the Control Panel tab (which points to the Senior Debt rate of 5%) rather than the correct cell of D51 (which points to the Subordinated Debt rate of 8%).

    To be clear, this is not a criticism of the spreadsheet’s architects. What seems to have happened in this case is that the spreadsheet creator copied an earlier row (row 40) and forgot to edit the formula to account for the fact that row 50 is about subordinated debt while row 40 is about senior debt. It’s a familiar story to anyone who’s ever been tasked with doing something complicated in Excel. Features like copy-and-paste and complex formulas are very powerful, but they also make it very easy for a small mistake to cascade — and remarkably hard to catch!
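
    To make the mechanics concrete, here is a hypothetical back-of-the-envelope version of the same mistake — the $1 billion principal below is invented for illustration; only the 5%-vs-8% rate mix-up mirrors the spreadsheet:

```shell
# The subordinated-debt row should accrue interest at 8%, but the copied
# formula points at the senior-debt rate of 5%. (Principal is made up.)
principal=1000000000                  # hypothetical $1B of subordinated debt
wrong=$((principal * 5 / 100))        # interest using the copied senior-debt rate
right=$((principal * 8 / 100))        # interest using the correct rate
echo "interest understated by \$$((right - wrong)) per year"
```

    Scale the (invented) principal and stretch the error over a multi-year model, and a single wrong cell reference quietly grows into nine figures.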

    Hopefully the Department of Commerce catches and fixes this little clerical mishap, and applicants are submitting good spreadsheets, free of errors. But this case underscores (1) how many of the world’s financial and policy decisions rest on Excel spreadsheets where you just have to hope 🤞🏻 no large mistakes were made, and (2) the potential for tools like Rowsie to serve as tireless proofreaders and assistants that help us avoid mistakes and understand those critical spreadsheets quickly.

    If you’re interested, check out Rowsie at https://www.rowsie.ai/!

    DISCLAIMER: I happen to be friends with the founders of Rowsie which is how I found out about this

  • The Problem with the MCU

    Something is wrong with the state of the Marvel Cinematic Universe (MCU).

    In 2019, Disney/Marvel topped off an amazing decade-plus run of films with Avengers: Endgame, becoming (until Avatar was re-released in China) the highest grossing film of all time. This was in spite of an objectively complicated plot which required a deep understanding of all of Marvel Cinematic Universe continuity to follow.

    And yet critics and fans (myself included! 🙋🏻‍♂️) loved it! It seemed like Marvel could do no wrong.

    It doesn’t feel that way anymore. While I’ve personally enjoyed Black Panther: Wakanda Forever and Shang-Chi, this Time article does a good job of critiquing how complicated the MCU has become, so much so that a layperson can’t just watch one casually.

    But it misses one additional thing which I think gets to the heart of why the MCU just doesn’t feel right anymore. The MCU is now so commercially large that the scripts feel like they’re written by a committee of businesspeople (oh, make sure you’re setting up this other show/movie! let’s get in an action scene with some kind of viral quip!) rather than writers/directors trying to tell an entertaining story for the sake of the story.

    And until they get to that, I’m not sure even Marvel’s plans to cut down on the number of productions will deliver.

    How Marvel Lost Its Way
    Eliana Dockterman | Time

  • The Parents Trying to Pass Down a Language They Hardly Speak

    This article resonates with me on so many levels: both as the child who came to the US and saw his language skills deteriorate as he assimilated, and as the parent trying to preserve his kids’ connection to their cultural heritage.

  • Pixel’s Parade of AI

    I am a big Google Pixel fan, being an owner and user of multiple Google Pixel line products. As a result, I tuned in to the recent MadeByGoogle stream. While it was hard not to be impressed with the demonstrations of Google’s AI prowess, I couldn’t help but be a little baffled…

    What was the point of making everything AI-related?

    Given how low Pixel’s market share is in the smartphone market, you’d think the focus ought to be on explaining why “normies” should buy the phone or find the price tag compelling, but instead every feature had to tie back to AI in some way.

    Don’t get me wrong, AI is a compelling enabler of new technologies. Some of the call and photo functionalities are amazing, both as technological demonstrations and in terms of pure utility for the user.

    But every product person learns early that customers care less about how something gets done and more about whether the product does what they want it to. And, as someone who very much wants a meaningful rival to Apple and Samsung, I hope Google doesn’t forget that either.

  • Artificial Wombs

    Anyone who’s ever seen a NICU or statistics on the health outcomes for extremely premature babies understands how artificial womb technology could be incredibly impactful.

    At the same time, it poses some tricky ethical challenges:

    • How can you get informed, ethical consent to trial this? It’s nigh impossible to predict who will need a premature delivery.
    • In places like the US with limitations / controversy on reproductive rights, how do you push this forward without impacting assessments of “fetal viability” that may impact the abortion debate?

    Great piece in Nature News below 👇🏻

  • The “Large Vision Model” (LVM) Era is Upon Us

    Unless you’ve been living under a rock, you’ll know the tech industry has been rocked by the rapid advance in performance of large language models (LLMs) such as ChatGPT. By adopting self-supervised learning methods, LLMs “learn” to sound like a human being by learning to fill in gaps in language and, in doing so, become remarkably adept not just at language problems but at tasks requiring understanding & creativity.

    Interestingly, the same is happening in imaging, as models largely trained to fill in “gaps” in images are becoming amazingly capable. The group of a friend of mine, Pearse Keane, at University College London, for instance, just published a model trained with self-supervised learning on ophthalmological images that is not only capable of diagnosing diabetic retinopathy and glaucoma relatively accurately, but also relatively good at predicting cardiovascular events and Parkinson’s.

    At a talk, Andrew Ng captured it well by pointing out the parallels between the advances in language modeling that followed the seminal Transformer paper and what is happening now in the “large vision model” world, with this great illustration.

    From Andrew Ng (Image credit: EETimes)

  • I want your market and you to pay for it

    I have followed TSMC very closely since I started my career in the semiconductor industry. A brilliant combination of a bold business bet (by founder Morris Chang), industry tailwinds (the rise of the fabless semiconductor model), forward thinking from the Taiwanese government (which helped launch TSMC), and technological progress, the company has been fascinating to watch enter the public consciousness.

    In hearing about TSMC’s investment in the very aptly-named ESMC (European Semiconductor Manufacturing Company), I can’t help but think this is another brilliant TSMC-esque play. TSMC gets:

    • Guaranteed outsized market share in leading-edge semiconductor technology in Europe
    • Paid for in part by some of its largest customers (Infineon, Bosch, and NXP), who will likely commit / guarantee some of their volumes to fill this new manufacturing facility
    • AND (likely) additional subsidies / policy support from the European Union (which increasingly doesn’t want to be left out of advanced chip manufacturing given Asia’s current dominance and the US’s CHIPS Act push)

    TSMC has managed to turn what could have been a disaster for them (growing nationalism in semiconductor manufacturing) into a subsidized, volume-committed factory.

  • The cable bundle of the future

    Charter and Disney recently made peace in their ESPN carriage fee dispute.

    Three things happening in the video delivery world are colliding here:

    1. People are “cutting the cord” as they become less dependent on cable for high quality content (due to things like YouTube and Netflix)
    2. Because the “cable bundle” economics have been lost (cable subscribers used to cross-subsidize each other’s viewing — you pay because you really want ESPN & I pay because I really want HBO and, as a result, we both end up paying less for more content), video streaming services like Disney+ inevitably raise prices & introduce ad models to cover their (very high) content production costs. This naturally means new bundles will emerge as consumers look for ways to pay less for more content.
    3. High-speed internet today is largely subsidized by the cable industry’s investments in delivering video. If “cord cutting” (as in canceling cable) continues, then eventually the cost of high-speed internet will go up as it becomes the “main event” for these companies’ financials. Given (2), I think this likely means “cable companies” will increasingly become “bundled internet + streaming service” companies soon.
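
    The cross-subsidy logic in (2) is easy to sketch with invented numbers — imagine a sports fan and a drama fan who each value their favorite channel at $16 and the other channel at $5:

```shell
# A la carte at $15/channel: each fan buys only their favorite (the other
# channel isn't worth $15 to them). Bundled at $20: both buy the pair,
# since each values it at 16 + 5 = 21 >= 20.
alacarte=15; bundle=20
echo "a la carte: each pays \$${alacarte} for 1 channel; seller collects \$$((2 * alacarte))"
echo "bundled:    each pays \$${bundle} for 2 channels; seller collects \$$((2 * bundle))"
```

    With toy numbers like these, both sides win: each fan gets two channels for less than two separate subscriptions would cost, and the seller collects more total revenue — which is exactly the math that unravels when subscribers leave the bundle.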

    All this is ironically not that different from the original cable bundle, only this time we have a few new logos (e.g. Netflix) and a little more price transparency, since you can see what the unsubsidized streaming services (e.g. Disney+, Hulu, etc.) would cost outside of the bundle.

  • Yummy AND durable 🍅

    One reason much of the modern produce we buy tastes so bland is that our agricultural system has bred modern varieties for ship-ability and machine-harvestability rather than for flavor.

    While this has expanded access to produce (both geographically and in terms of cost, thanks to automation), it’s meant that consumers often have to choose between shelf life and good taste.

    But, advances in plant genetics could change that. If we could understand the genes that are responsible for durability, that could inform how we breed or gene-edit varieties that can combine desirable taste attributes with durability.

    Researchers in China published a paper in Nature Plants identifying the gene (fs8.1) responsible for making Roma tomatoes elongated and crush-resistant enough to be machine-harvestable, and even demonstrated that it works in other varieties without changing their taste.

    Both the paper and the Science article on it are worth checking out.

  • UAW Strike vs. the Detroit Three EV Transition

    Bloomberg had a great article over the weekend about the events leading up to the (partial) UAW strike against the Detroit Three (GM, Ford, and Stellantis [fka Chrysler]).

    I’m not surprised (nor should anyone be) by the strike. New UAW management, years-long grievances, and a tight labor market favoring workers meant a strike was almost certainly always going to be the first move by the UAW.

    The bigger question is what this will do to the Detroit Three’s electrification push. The UAW’s challenge here (as it has been since the 80s) is that the Detroit Three are in a weak and uncertain position as it relates to foreign auto makers and new EV giants like Tesla. While the Inflation Reduction Act may give US auto makers the US EV market, going into a technology transition with a large labor cost & agility disadvantage is a surefire way to (continue to) cede the much larger global market which, in the end, hurts all of the US auto industry (not to mention the Biden administration’s hopes that this creates new jobs and centers green manufacturing in the US).

    How Auto Executives Misread the UAW Ahead of Historic Strike
    David Welch, Keith Naughton, Gabrielle Coppola, and Josh Eidelson | Bloomberg News

  • Hopelessness in China’s Youth

    This article in the Economist paints a dismal picture of the state of life for the youth in China: youth unemployment so high the government has stopped reporting on it (as if that was going to change anything…), housing and childcare costs so high that young people have given up on having traditional families, a government and state-run media that actively scold them for being soft and pampered, and the best and brightest fleeing to Singapore…

    How’s that “Chinese dream 中國夢” going?

  • Smart Home Manifesto

    This is an older piece (written in 2016) but remarkable in how well it resonates even 7 years later.

    Given that I have a home server and opinions on Matter/Thread, it shouldn’t be a surprise that I have many smart home gadgets in my house. And while I’ve made many purchasing and configuration choices in the spirit of this manifesto, it boggles my mind how far I still fall short of the seamless vision the writer (Paulus Schoutsen, founder of Home Assistant) lays out.


    Perfect Home Automation
    Paulus Schoutsen | Home Assistant Blog

  • Psychedelics in the Clinic

    When I first heard about the use of psychedelics (like ketamine and psilocybin) for treatment of mental illness, I was skeptical. It just seemed too ripe for abuse.

    But there is a growing body of credible academic work suggesting that psychedelics, when dosed properly and used in conjunction with therapy / other drugs, can be a gamechanger — especially for treatment-resistant depression and suicidality — and that is incredibly exciting.

    At the same time, as a former telemedicine startup operator, this makes me more alarmed by the numerous companies working to commercialize these. In the bid for venture-style growth, it’s all too easy to lose track of the “when dosed properly and used in conjunction with therapy / other drugs” part.

    In any event, this article from Medicine at Michigan is a good overview of the recent research highlights in the field and why so many clinicians and scientists are excited.

    Serious about psychedelics
    Katie Whitney | Medicine at Michigan

  • Hawaiian Electric having a PG&E Moment

    The fires in Maui have had a devastating human toll (111 dead, 1,000 missing as of this writing). It is not surprising that questions are being raised about the role Hawaii’s utility (Hawaiian Electric/HECO) played in the disaster.

    While it will take time to sort out the investigation and the class action lawsuit, it’s clear that investors and Hawaiian Electric management are scrambling: the WSJ reports that Hawaiian Electric is now talking to restructuring advisors to explore its next steps, in a crisis that closely parallels the series of wildfires that were ultimately blamed on Northern California utility PG&E and pushed it into bankruptcy.

    Utilities now face three simultaneous problems (arguably of their own making):

    • climate change escalating the risks of catastrophic wildfires and storms
    • utilities across the country having aging energy infrastructure
    • homeownership patterns, disaster insurance coverage & premiums, and utility risk management plans built for a pre-climate-change risk environment

    The smart ones will be proactively overhauling their processes and infrastructure to cope with this. The less smart ones will potentially be dragged kicking and screaming into this world in much the same way that PG&E and Hawaiian Electric currently are.

  • Setting Up Pihole, Nginx Proxy, and Twingate with OpenMediaVault

    (Note: this is part of my ongoing series on cheaply selfhosting)

    I recently shared how I set up a (OpenMediaVault) home server on a cheap mini-PC. After posting it, I received a number of suggestions that inspired me to make a few additional tweaks to improve the security and usability of my server.

    Read more if you’re interested in setting up (on an OpenMediaVault v6 server):

    • Pihole, a “DNS filter” that blocks ads / trackers
    • using Pihole as a local DNS server to have custom web addresses for software services running on your network and Nginx to handle port forwarding
    • Twingate (a better alternative to opening up a port and setting up Dynamic DNS to grant secure access to your network)


    Pihole is a lightweight local DNS server (it gets its name from the Raspberry Pi, a <$100 device popular with hobbyists, that it can run fully on).

    A DNS (or Domain Name System) server converts human-readable addresses (like www.google.com) into the numerical IP addresses machines use to route traffic. As a result, every piece of internet-connected technology is routinely making DNS requests when using the internet. Internet service providers typically offer their own DNS servers for their customers, but some technology vendors (like Google and Cloudflare) also offer their own DNS services with optimizations for speed, security, and privacy.
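
    You can watch a resolver do this from any command line — getent asks whatever DNS the system is configured to use, the same path Pihole will later sit on:

```shell
# Ask the system resolver to turn a name into an address. localhost resolves
# without touching the network; try www.google.com to exercise your actual
# configured DNS server.
getent hosts localhost
```

    The output pairs the IP address (here the loopback address) with the name that was looked up.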

    A home-grown DNS server like Pihole can layer additional functionality on top:

    • DNS “filter” for ad / tracker blocking: Pihole can be configured to return dummy IP addresses for specific domains. This can be used to block online tracking or ads (by blocking the domains commonly associated with those activities). While not foolproof, one advantage this approach has over traditional ad blocking software is that, because this blocking happens at the network level, the blocking extends to all devices on the network (such as internet-connected gadgets, smart TVs, and smartphones) without needing to install any extra software.
    • DNS caching for performance improvements: In addition to the performance gains from blocking ads, Pihole also boosts performance by caching commonly requested domains, reducing the need to “go out to the internet” to find a particular IP address. While this won’t speed up a video stream or download, it will make content from frequently visited sites on your network load faster by skipping that internet lookup step.
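
    At its core, the ad-blocking mechanism is just a conditional lookup. Here is a toy sketch (not Pihole’s actual implementation, which answers over the DNS protocol) of the decision it makes for every request:

```shell
# Toy model of the per-request decision: blocklisted names get a dummy answer
# ( goes nowhere, so the ad or tracker never loads); everything else
# is passed along to a real upstream resolver (e.g. Google or Cloudflare).
printf 'ads.example.com\ntracker.example.net\n' > blocklist.txt
resolve() {
  if grep -qxF "$1" blocklist.txt; then
    echo ""
  else
    echo "forward to upstream DNS"
  fi
}
resolve ads.example.com      # blocked
resolve www.wikipedia.org    # allowed
```

    Because this decision happens on the server, every device pointing at it inherits the blocking with no client-side software.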

    To install Pihole using Docker on OpenMediaVault:

    • If you haven’t already, make sure you have OMV Extras and Docker Compose installed (refer to the section Docker and OMV-Extras in my previous post) and have a static local IP address assigned to the server.
    • Login to your OpenMediaVault web admin panel, go to [Services > Compose > Files], and press the  button. Under Name put down Pihole and under File, adapt the following (making sure the number of spaces are consistent)
      version: "3"
      services:
        pihole:
          container_name: pihole
          image: pihole/pihole:latest
          ports:
            - "53:53/tcp"
            - "53:53/udp"
            - "8000:80/tcp"
          environment:
            TZ: 'America/Los_Angeles'
            WEBPASSWORD: '<Password for the web admin panel>'
            FTLCONF_LOCAL_IPV4: '<your server IP address>'
          volumes:
            - '<absolute path to shared config folder>/pihole:/etc/pihole'
            - '<absolute path to shared config folder>/dnsmasq.d:/etc/dnsmasq.d'
          restart: unless-stopped
      You’ll need to replace <Password for the web admin panel> with the password you want to use to access the Pihole web configuration interface, <your server IP address> with the static local IP address for your server, and <absolute path to shared config folder> with the absolute path to the config folder where you want Docker-installed applications to store their configuration (you can find it by going to [Storage > Shared Folders] in the administrative panel).

      I live in the Bay Area so I set the timezone TZ to America/Los_Angeles. You can find yours here.

      Under Ports, I’ve kept the port 53 reservation (as this is the standard port for DNS requests) but I’ve chosen to map the Pihole administrative console to port 8000 (instead of the default of port 80, to avoid a conflict with the OpenMediaVault admin panel default). Note: This will prevent you from using Pihole’s default pi.hole domain as a way to get to the Pihole administrative console out-of-the-box. Because standard web traffic goes to port 80 (and this configuration has Pihole listening on port 8000), pi.hole would likely just direct you to the OpenMediaVault panel. While you could let pi.hole take over port 80, you would need to move OpenMediaVault’s admin panel to a different port (which has complexities of its own). I ultimately opted to keep OpenMediaVault at port 80, knowing that I could configure Pihole and Nginx proxy (see below) to redirect pi.hole to the right port.

      You’ll notice this configures two volumes: one for dnsmasq.d, the DNS service itself, and one for pihole, which provides an easy way to configure dnsmasq.d and download blocklists.

      Note: the above instructions assume your home network, like most, is IPv4-only. If you have an IPv6 network, you will need to add an IPv6: True line under environment: and replace the FTLCONF_LOCAL_IPV4: '<your server IP address>' line with FTLCONF_LOCAL_IPV6: '<your server IPv6 address>'. For more information, see the official Pihole Docker instructions.

      Once you’re done, hit Save and you should be returned to your list of Docker compose files for the next step. Notice that the new Pihole entry you created has a Down status, showing the container has yet to be initiated.
    • Disabling systemd-resolved: Most modern Linux operating systems include a built-in DNS resolver that listens on port 53 called systemd-resolved. Prior to initiating the Pihole container, you’ll need to disable this to prevent that port conflict. Use WeTTy (refer to the section Docker and OMV-Extras in my previous post) or SSH to login as the root user to your OpenMediaVault command line. Enter the following command:
      nano /etc/systemd/resolved.conf
      Look for the line that says #DNSStubListener=yes and replace it with DNSStubListener=no, making sure to remove the # at the start of the line. (Hit Ctrl+X to exit, Y to save, and Enter to overwrite the file). This configuration will tell systemd-resolved to stop listening to port 53.

      To complete the configuration change, you’ll need to point the /etc/resolv.conf symlink at the resolver file systemd-resolved maintains (which lists your real upstream DNS servers rather than the local stub) by running:
      sh -c 'rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf'
      Now all that remains is to restart systemd-resolved:
      systemctl restart systemd-resolved
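
      If you would rather make the resolved.conf change non-interactively (e.g. from a setup script), a single sed substitution does the same thing — sketched here on a scratch copy so you can inspect the result before pointing it at the real /etc/systemd/resolved.conf:

```shell
# Create a stand-in for the stock config, flip the stub listener off, verify.
printf '#DNSStubListener=yes\n' > resolved.conf.demo
sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' resolved.conf.demo
grep DNSStubListener resolved.conf.demo
```

      The pattern matches the line whether or not it is still commented out, so re-running it is harmless.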
    • How to start / update / stop / remove your Pihole container: You can manage all of your Docker Compose files by going to [Services > Compose > Files] in the OpenMediaVault admin panel. Click on the Pihole entry (which should turn it yellow) and press the  (up) button. This will create the container, download any files needed, and, if you properly disabled systemd-resolved in the last step, initiate Pihole.

      And that’s it! To prove it worked, go to your-server-ip:8000 in a browser and you should see the login for the Pihole admin webpage (see below).

      From time to time, you’ll want to update the container. OMV makes this very easy. Every time you press the  (pull) button in the [Services > Compose > Files] interface, Docker will pull the latest version (maintained by the Pihole team).

    Now that you have Pihole running, it is time to enable and configure it for your network.

    • Test Pihole from a computer: Before you change your network settings, it’s a good idea to make sure everything works.
      • On your computer, manually set your DNS service to your Pihole by putting in your server IP address as the address for your computer’s primary DNS server (Mac OS instructions; Windows instructions; Linux instructions). Be sure to leave any alternate / secondary addresses blank (many computers will issue DNS requests to every server they have on their list and if an alternative exists you may not end up blocking anything).
      • (Temporarily) disable any ad blocking service you may have on your computer / browser you want to test with (so that this is a good test of Pihole as opposed to your ad blocking software). Then try to go to https://consumerproductsusa.com/ — this is a URL that is blocked by default by Pihole. If you see a very spammy website promising rewards, either your Pihole does not work or you did not configure your DNS correctly.
      • Finally login to the Pihole configuration panel (your-server-ip:8000) using the password you set up during installation. From the dashboard click on the Queries Blocked box at the top (your colors may vary but it’s the red box on my panel, see below).

        On the next screen, you should see the domain consumerproductsusa.com next to the IP address of your computer, confirming that the address was blocked.

        You can now turn your ad blocking software back on!
      • You should now set the DNS service on your computer back to “automatic” or “DHCP” so that it will inherit its DNS settings from the network/router (and especially if this is a laptop that you may use on another network).
    • Configure DNS on router: Once you’ve confirmed that the Pihole service works, you should configure the default DNS settings on your router to make Pihole the DNS service for your entire network. The instructions for this will vary by router manufacturer. If you use Google Wifi as I do, here are the instructions.

      Once this is completed, every device which inherits DNS settings from the router will now be using Pihole for their DNS requests.

      Note: one downside of this approach is that the Pihole becomes a single point of failure for the entire network. If the Pihole crashes or fails for any reason, none of your network’s DNS requests will go through until the router’s settings are changed or the Pihole comes back up. Pihole generally has good reliability, so this is unlikely to be an issue most of the time, but I am currently using Google’s DNS as a fallback on my Google Wifi (for the times when something goes awry with my server). I would also encourage you to learn how to change your router’s DNS settings in case things go bad, so that your access to the internet isn’t taken out unnecessarily.
    • Configure Pihole: To get the most out of Pihole’s ad blocking functionality, I would suggest three things:
      • Select Good Upstream DNS Servers: From the Pihole administrative panel, click on Settings. Then select the DNS tab. Here, Pihole allows you to configure which external DNS services the DNS requests on your network should go to if they aren’t going to be blocked and haven’t yet been cached. I would recommend selecting the checkboxes next to Google and Cloudflare given their reputations for providing fast, secure, and high quality DNS services (and selecting multiple will provide redundancy).
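        For reference, the resolver addresses behind those checkboxes (useful if you ever need to enter them manually as custom upstreams):

        ```
        Google DNS:     8.8.8.8, 8.8.4.4
        Cloudflare DNS: 1.1.1.1, 1.0.0.1
        ```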
      • Update Gravity periodically: Gravity is the system by which Pihole updates its list of domains to block. From the Pihole administrative panel, click on [Tools > Update Gravity] and click the Update button. If there are any updates to the blocklists you are using, these will be downloaded and “turned on”.
      • Configure Domains to block/allow: Pihole allows administrators to granularly customize the domains to block (blacklist) or allow (whitelist). From the Pihole administrative panel, click on Domains. Here, an admin can add a domain (or a regular expression matching a family of domains) to the blacklist (if it’s not currently blocked) or the whitelist (if it currently is) to change what happens when a device on the network tries to resolve that domain.

        I added whitelist exclusions for link.axios.com to let me click through links from the Axios email newsletters I receive and www.googleadservices.com to let my wife click through Google-served ads. Pihole also makes it easy to block or allow a domain that a device on your network has actually requested. Tap on Total Queries from the Pihole dashboard, click on the IP address of the device making the request, and you’ll see every DNS request (including those which were blocked) with a link beside each to add it to the whitelist or blacklist.

        Pihole also allows admins to configure different rules for different sets of devices. This can be done by calling out clients (by clicking on Clients and picking their IP addresses / MAC addresses / hostnames), assigning them to groups (defined by clicking on Groups), and then configuring domain rules to go with those groups (in Domains). Unfortunately, because Google Wifi forwards DNS requests itself (so they all appear to come from the router) rather than letting each device query the Pihole directly, I can only do this for devices that are configured to point directly at the Pihole, but this could be an interesting way to impose parental internet controls.

    Now you have a Pihole network-level ad blocker and DNS cache!

    Local DNS and Nginx proxy

    As a local DNS server, Pihole can do more than just block ads. It also lets you create human readable addresses for services running on your network. In my case, I created one for the OpenMediaVault admin panel (omv.home), one for WeTTy (wetty.home), and one for Ubooquity (ubooquity.home).

    If your setup is like mine (all services use the same IP address but different ports), you will need to set up a proxy as DNS does not handle port forwarding. Luckily, OpenMediaVault has Nginx, a popular web server with a performant proxy, built-in. While many online tutorials suggest installing Nginx Proxy Manager, that felt like overkill, so I decided to configure Nginx directly.

    To get started:

    • Configure the A records for the domains you want in Pihole: Login to your Pihole administrative console (your-server-ip:8000) and click on [Local DNS > DNS Records] from the sidebar. Under the section called Add a new domain/IP combination, fill out the Domain: you want for a given service (like omv.home or wetty.home) and the IP Address: (if you’ve been following my guides, this will be your-server-ip). Press the Add button and it will show up below. Repeat for all the domains you want. If you have a setup similar to mine, you will see many domains pointed at the same IP address (because the different services are simply different ports on my server).

      To test if these work, enter any of the domains you just created into a browser and it should take you to the login page for the OpenMediaVault admin panel (as currently they are all just pointing at your server’s IP address).

      Note 1: while you can generally use whatever domains you want, it is suggested that you don’t use a TLD that could conflict with an actual website (e.g. .com) or one that is commonly used by networking systems (e.g. .local or .lan). This is why I used .home for all of my domains (the IETF has a list of recommendations, although it includes .lan, which I would advise against as some routers, such as Google Wifi, use it).

      Note 2: Pihole itself automatically tries to forward pi.hole to its web admin panel, so you don’t need to configure that domain. The next step (configuring proxy port forwarding) will allow pi.hole to work.
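      Under the hood, Pihole v5 writes these Local DNS entries to a plain hosts-style file (/etc/pihole/custom.list), one IP/domain pair per line. Assuming a server at 192.168.86.42 (a placeholder) and the domains from my setup, that file would look roughly like:

      ```
      192.168.86.42 omv.home
      192.168.86.42 wetty.home
      192.168.86.42 ubooquity.home
      192.168.86.42 ubooquityadmin.home
      ```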
    • Edit the Nginx proxy configuration: Pihole’s Local DNS server will send users looking for one of the domains you set up (e.g. wetty.home) to the IP address you configured. Now you need your server to forward that request to the appropriate port to get to the right service.

      You can do this by taking advantage of the fact that Nginx, by default, will load any .conf file in the /etc/nginx/conf.d/ directory as a proxy configuration. Pick any file name you want (I went with dothome.conf because all of my service domains end with .home) and after using WeTTy or SSH to login as root, run:
      nano /etc/nginx/conf.d/<your file name>.conf
      The first time you run this, it will open up a blank file. Nginx looks at the information in this file to decide how to redirect incoming requests. What we’ll want to do is tell Nginx that when a request comes in for a particular domain (e.g. ubooquity.home or pi.hole), that request should be sent to a particular IP address and port.

      Manually writing these configuration files can be a little daunting and, truth be told, the text file I share below is the result of a lot of trial and error, but in general there are 2 types of proxy commands that are relevant for making your domain setup work.

      One is a proxy_pass where Nginx will basically take any traffic to a given domain and just pass it along (sometimes with additional configuration headers). I use this below for wetty.home, pi.hole, ubooquityadmin.home, and ubooquity.home. It worked without the need to pass any additional headers for WeTTy and Ubooquity, but for pi.hole, I had to set several additional proxy headers (which I learned from this post on Reddit).

      The other is a 301 redirect, where you tell the client to simply forward itself to another location. I use this for ubooquityadmin.home because the actual URL you need to reach is not / but /admin/, and the 301 makes it easy to set up an automatic forward. I then use the regex match ~ /(.*)$ to make sure every other URL is proxy_pass‘d to the appropriate domain and port.

      You’ll notice I did not include the domain I configured for my OpenMediaVault console (omv.home). That is because omv.home already goes to the right place without needing any proxy to port forward.
      server {
          listen 80;
          server_name pi.hole;
          location / {
              proxy_pass http://<your-server-ip>:8000;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_hide_header X-Frame-Options;
              proxy_set_header X-Frame-Options "SAMEORIGIN";
              proxy_read_timeout 90;
          }
      }

      server {
          listen 80;
          server_name wetty.home;
          location / {
              proxy_pass http://<your-server-ip>:2222;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }

      server {
          listen 80;
          server_name ubooquity.home;
          location / {
              proxy_pass http://<your-server-ip>:2202;
          }
      }

      server {
          listen 80;
          server_name ubooquityadmin.home;
          location = / {
              return 301 http://ubooquityadmin.home/admin;
          }
          location ~ /(.*)$ {
              proxy_pass http://<your-server-ip>:2203/$1;
          }
      }
      If you are using other domains, ports, or IP addresses, adjust accordingly. Be sure all your curly braces have their mates ({}) and that each directive ends with a semicolon (;) or Nginx will refuse to load the configuration. I use Tabs between statements (i.e. between listen and 80) to format them more nicely, but Nginx will accept any amount or type of whitespace.

      To test if your new configuration works, save your changes (hit Ctrl+X to exit, Y to save, and Enter to overwrite the file if you are editing an existing one). In the command line, run the following command to restart Nginx with your new configuration loaded.
      systemctl restart nginx
      Try to log in to your OpenMediaVault administrative panel in a browser. If that works, it means Nginx is up and running and you at least didn’t make any obvious syntax errors!
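      Nginx can also check its configuration for syntax errors without restarting — handy to run before the restart command so that a typo doesn’t take down a working proxy:

      ```shell
      # Validate all loaded Nginx configuration files; reports the file and
      # line number of the first syntax error, or "syntax is ok" on success
      nginx -t
      ```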

      Next try to access one of the domains you just configured (for instance pi.hole) to test if the proxy was configured correctly.

      If either of those steps failed, use WeTTy or SSH to log back in to the command line and use the command above to edit the file (you can delete everything if you want to start fresh) and rerun the restart command after you’ve made changes to see if that fixes it. It may take a little bit of doing if you have a tricky configuration but once you’re set, everyone on the network can now use your configured addresses to access the services on your network.
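      From any machine whose DNS points at the Pihole, you can also exercise the proxy from the command line with curl, using the domains configured above:

      ```shell
      # -I fetches only the response headers for a quick check
      curl -I http://pi.hole

      # The root of ubooquityadmin.home should answer with a 301 and a
      # Location: header pointing at /admin (from the return 301 rule)
      curl -I http://ubooquityadmin.home
      ```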


    In my previous post, I set up Dynamic DNS and a Wireguard VPN to grant secure access to the network from external devices (e.g. a work computer, or my smartphone when I’m out). While it worked, the approach had two flaws:

    1. The work required to set up each device for Wireguard is quite involved (you have to configure it on the VPN server and then pass credentials to the device via QR code or file)
    2. It requires me to open up a port on my router for external traffic (a security risk) and maintain a Dynamic DNS setup that is vulnerable to multiple points of failure and could make changing domain providers difficult.

    A friend of mine, after reading my post, suggested I look into Twingate instead. Twingate offers several advantages, including:

    • Simple graphical configuration of which resources should be made available to which devices
    • Easier to use client software with secure (but still easy to use) authentication
    • No need to configure Dynamic DNS or open a port
    • Support for local DNS rules (i.e. the domains I configured in Pihole)

    I was intrigued (it didn’t hurt that Twingate has a generous free Starter plan that should work for most home server setups). To set up Twingate to enable remote access:

    • Create a Twingate account and Network: Go to their signup page and create an account. You will then be asked to set up a unique Network name. The resulting address, <yournetworkname>.twingate.com, will be your Network configuration page from where you can configure remote access.
    • Add a Remote Network: Click the Add button on the right-hand side of the screen. Select On Premise for Location and enter any name you choose (I went with Home network).
    • Add Resources: Select the Remote Network you just created (if you haven’t already) and use the Add Resource button to add an individual domain name or IP address and then grant access to a group of users (by default, it will go to everyone).

      With my configuration, I added 5 domains (pi.hole + the four .home domains I configured through Pihole) and 1 IP address (for the server, to handle the ubooquityadmin.home forwarding and in case there was ever a need to access an additional service on my server that I had not yet created a domain for).
    • Install Connector Docker Container: Making the selected network resources available through Twingate requires installing a Twingate Connector on something internet-connected within the network.

      Press the Deploy Connector button on one of the connectors on the right-hand side of the Remote Network page (mine is called flying-mongrel). Select Docker in Step 1 to get Docker instructions (see below). Then press the Generate Tokens button under Step 2 to create the tokens that you’ll need to link your Connector to your Twingate network and resources.

      With the Access Token and Refresh Token saved, you are ready to set up the Docker container. Log in to the OpenMediaVault administrative panel, go to [Services > Compose > Files], and press the  button. Under Name put down Twingate Connector and under File, enter the following (making sure the number of spaces is consistent):
      services:
        twingate_connector:
          container_name: twingate_connector
          restart: unless-stopped
          image: "twingate/connector:latest"
          environment:
            - SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
            - TWINGATE_API_ENDPOINT=/connector.stock
            - TWINGATE_NETWORK=<your network name>
            - TWINGATE_ACCESS_TOKEN=<your connector access token>
            - TWINGATE_REFRESH_TOKEN=<your connector refresh token>
            - TWINGATE_LOG_LEVEL=7
      You’ll need to replace <your network name> with the name of the Twingate network you created, <your connector access token> and <your connector refresh token> with the access token and refresh token generated from the Twingate website. Do not add any single or double quotation marks around the network name or the tokens as they will result in a failed authentication with Twingate (as I was forced to learn through experience).

      Once you’re done, hit Save and you should be returned to your list of Docker compose files. Click on the entry for Twingate Connector you just created and then press the  (up) button to initialize the container.

      Go back to your Twingate network page and select the Remote Network your Connector is associated with. If you were successful, within a few moments, the Connector’s status will reflect this (see below for the before and after).

      If, after a few minutes there is still no change, you should check the container logs. This can be done by going to [Services > Compose > Services] in the OpenMediaVault administrative panel. Select the Twingate Connector container and press the (logs) button in the menubar. The TWINGATE_LOG_LEVEL=7 setting in the Docker configuration file sets the Twingate Connector to report all activities in great detail and should give you (or a helpful participant on the Twingate forum) a hint as to what went wrong.
    • Add Users and Install Clients: Once the configuration is done and the Connector is set up, all that remains is to add user accounts and install the Twingate client software on the devices that should be able to access the network resources.

      Users can be added (or removed) by going to your Twingate network page and clicking on the Team link in the menu bar. You can Add User (via email) or otherwise customize Group policies. Be mindful of the Twingate Starter plan limit of 5 users.

      As for the devices, the client software can be found at https://get.twingate.com/. Once installed, to access the network, the user will simply need to authenticate.
    • Remove the old VPN / Dynamic DNS setup: This is not strictly necessary, but if you followed my instructions from before, you can now undo those steps by:
      • Closing the port you opened from your Router configuration
      • Disabling Dynamic DNS setup from your domain provider
      • “Down”-ing and deleting the container and configuration file for DDClient (you can do this by going to [Services > Compose > Files] from OpenMediaVault admin panel)
      • Deleting the configured Wireguard clients and tunnels (you can do this by going to [Services > Wireguard] from the OpenMediaVault admin panel) and then disabling the Wireguard plugin (go to [System > Plugins])
      • Removing the Wireguard client from your devices

    And there you have it! A secure means of accessing your network while retaining your local DNS settings and avoiding the pitfalls of Dynamic DNS and opening a port.


    There were a number of resources that were very helpful in configuring the above. I’m listing them below in case they are helpful:

    (If you’re interested in how to setup a home server on OpenMediaVault or how to self-host different services, check out all my posts on the subject)

  • Why Thread is Matter’s biggest problem right now

    Stop me if you’ve heard this one before… Adoption of a technology is being impeded by too many standards. The solution? A new standard, of course, and before you know it, we now have another new standard to deal with.

    The smart home industry needs to figure out how to properly embrace Thread (and Matter). It (or something like it) will be necessary for broader smart home / Internet of Things adoption.

    Why Thread is Matter’s biggest problem right now
    Jennifer Pattison Tuohy | The Verge