Author: Ben

  • Dexcom non-prescription glucose monitor approved

    Cheap and accurate continuous glucose monitoring is a bit of a holy grail for consumer metabolic health as it allows people to understand how their diet and exercise impact their blood sugar levels, which can vary from person to person.

    It’s also a holy grail for diabetes care as making sure blood sugar levels are neither too high nor too low is critical for health (too low and you can pass out or risk seizure or coma; too high and you risk diabetic neuropathy, kidney disease, and cardiovascular problems). For Type I diabetics and severe Type II diabetics, it’s also vital for dosing insulin.

    Because insulin dosing needs to be done just right, I was always under the impression that one of two things would happen along the way to producing a cheap continuous glucose monitor, either:

    1. The FDA would be hesitant to approve a device that wasn’t highly accurate to avoid the risk of a consumer using the reading to mis-dose insulin OR
    2. The device makers (like Dexcom) would be hesitant to create an accurate enough glucose monitor that it might cannibalize their highly profitable prescription glucose monitoring business

    As a result, I was pleasantly surprised that Dexcom’s over-the-counter Stelo continuous glucose monitor was approved by the FDA. It remains to be seen what the price will be and what level of information the Stelo will share with the customer, but I view this as a positive development and (at least for now) tip my hat to both the FDA and Dexcom here.

    (Thanks to Erin Brodwin from Axios for sharing the news on X)


  • “Corporate” Design

    Read an introspective piece by famed ex-Frog Design leader Robert Fabricant about the state of the design industry and the unease that he says many of his peers are feeling. While I disagree with some of the concerns he lays out around AI / diversity being the drivers of this unease, he makes a strong case for how this is a natural pendulum swing after years of seeing “Chief Design Officers” and design innovation groups added to many corporate giants.

    I’ve had the privilege of working with very strong designers. This has helped me appreciate the value of design thinking as something that goes far beyond “making things pretty” and believe, wholeheartedly, that it’s something that should be more broadly adopted.

    At the same time, it’s no surprise to me that, during a time of layoffs and cost cutting, a design function that has become a little “spoiled” in recent years (and whose financial returns are hard to calculate) is experiencing a painful transition, especially for creative-minded designers who struggle with that ROI evolution.

    If Phase 1 was getting companies to recognize that design thinking is needed, Phase 2 will be the space learning how to measure, communicate, and optimize what the value of a team of seasoned designers brings to the bottom line.


  • Costco Love

    Nice piece in the Economist about how Costco’s model of operational simplicity leads to a unique position in modern retail: beloved by customers, investors, AND workers:

    • sell fewer things ➡️
    • get better prices from suppliers & less inventory needed ➡️
    • lower costs for customers ➡️
    • more customers & more willing to pay recurring membership fee ➡️
    • strong, recurring profits ➡️
    • ability to pay well and promote from within 📈💪🏻

    Why Costco is so loved
    The Economist

  • How packaging tech is changing how we build & design chips

    Once upon a time, the hottest thing in chip design was the “system-on-a-chip” (SOC). The idea was that you’d get the best cost and performance out of a chip by combining more parts into one piece of silicon: a smaller area (less silicon = less cost) and faster performance (closer parts = faster communication). The result was chips integrating more and more things.

    While the laws of physics haven’t reversed any of the above, the cost of designing chips that integrate more and more components has gone up sharply. Worse, different types of parts (like on-chip memory and physical/analog componentry) don’t scale down as well as pure logic transistors, making it very difficult to design chips that combine all these pieces.
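    The cost logic here can be illustrated with the textbook Poisson defect-yield model (a sketch of my own, not from the article; the defect density and die sizes are made-up numbers, not real process data):

    ```python
    # Illustrative only: textbook Poisson yield model with assumed numbers.
    import math

    def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
        """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
        return math.exp(-defects_per_cm2 * die_area_cm2)

    D = 0.2  # defects per cm^2 (assumed)

    # Monolithic SOC: one 8 cm^2 die; any defect scraps the whole thing.
    silicon_per_good_mono = 8.0 / poisson_yield(D, 8.0)

    # Chiplets: four 2 cm^2 dies, each tested before packaging ("known good
    # die"), so a defect scraps only one small die, not the whole system.
    silicon_per_good_chiplet = 4 * (2.0 / poisson_yield(D, 2.0))

    print(f"wafer area per good monolithic chip: {silicon_per_good_mono:.1f} cm^2")   # ~39.6
    print(f"wafer area per good chiplet system:  {silicon_per_good_chiplet:.1f} cm^2")  # ~11.9
    ```

    The point: when each small die can be tested before packaging, a defect wastes only a fraction of the silicon, which is a big part of why chiplets pencil out despite the extra packaging cost.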

    The rise of new packaging technologies, like Intel’s Foveros, Intel’s EMIB, and TSMC’s InFO, along with new ways of separating power delivery from data delivery (backside power delivery), has made it possible to more tightly integrate different pieces of silicon and improve their performance and size/cost.

    The result is that much of the most advanced silicon today is built as packages of chiplets rather than as massive SOC projects: a change that has happened over a fairly short period of time.

    This interview with the head of logic technologies at IMEC (a semiconductor industry research center) breaks this out…


    What is CMOS 2.0?
    Samuel K. Moore | IEEE Spectrum

  • Store all the things: clean electricity means thermal energy storage boom

    Thermal energy storage has been a difficult place for climatetech in years past. The low cost of fossil fuels (the source of the vast majority of high-temperature industrial heat to date) and the failure of large-scale solar thermal power plants to compete with the rapidly scaling solar photovoltaic industry made thermal storage feel like, at best, a market reserved for niche applications with unique fossil fuel price dynamics. This is despite some incredibly cool (dad-joke intended 🔥🥵🤓) technological ingenuity in the space.

    But, in a classic case of how cheap universal inputs change market dynamics, the plummeting cost and soaring availability of renewable electricity and the growing desire for industrial companies to get “clean” sources of industrial heat has resulted in almost a renaissance for the space as this Canary Media article (with a very nice table of thermal energy startups) points out.

    With cheap renewables (especially if the price varies), companies can buy electricity at low (sometimes near-zero if in the middle of a sunny and windy day) prices, convert that to high-temperature heat with an electric furnace, and store it for use later.

    The devil’s in the details: in particular, the round-trip energy efficiency (how much energy you can get out versus what you put in), the delivered heat temperature range and rate (how hot and how much power), and, of course, the cost of the system. But technologies like this could be key to greening sectors of the economy where lowering carbon output would otherwise be extremely difficult.
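    To make the economics concrete, here’s a back-of-the-envelope comparison (all prices and efficiencies below are my own illustrative assumptions, not figures from the article):

    ```python
    # Illustrative only: comparing the cost of delivered industrial heat from
    # natural gas vs. electric thermal storage charged on cheap renewables.

    MMBTU_TO_KWH = 293.07  # 1 MMBtu of heat in kWh

    # Incumbent: natural gas at $4/MMBtu, burned at 90% furnace efficiency.
    gas_heat_cost = (4.0 / MMBTU_TO_KWH) / 0.90  # USD per kWh of delivered heat

    # Thermal storage charged on midday surplus renewables at $0.01/kWh,
    # with 90% round-trip efficiency (energy out / energy in).
    storage_heat_cost = 0.01 / 0.90  # USD per kWh of delivered heat

    print(f"gas heat:    ${gas_heat_cost:.4f} per kWh")     # ~$0.0152
    print(f"stored heat: ${storage_heat_cost:.4f} per kWh") # ~$0.0111
    ```

    At these (assumed) numbers, stored renewable heat undercuts gas; at $0.05/kWh electricity it would not, which is why variable pricing and near-zero midday prices matter so much to the thesis.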


  • The IE6 YouTube conspiracy

    An oldie but a goodie — the story of how the YouTube team, post-Google acquisition, put up a “we won’t support Internet Explorer 6 in the future” message without any permission from anyone. (HT: Eric S)


    A Conspiracy to Kill IE6
    Chris Zacharias

  • Using your ear to control devices

    Very cool that we’re still finding new ways to control devices that can be applied to making people’s lives better.


  • Intel’s focus on chip packaging technology

    Intel has been interested in entering the foundry (semiconductor contract manufacturing) space for a long time. For years, Intel proudly boasted of being at the forefront of semiconductor technology — being first to market with the FinFET and smaller and smaller process geometries.

    So it’s interesting that, with the exception of the RibbonFET (the successor to the FinFET), almost all of the manufacturing technology announcements in Intel’s whitepaper (see whitepaper) aimed at prospective foundry customers pertain to packaging / “back end” technologies.

    I think this reflects both a recognition that they are no longer the furthest ahead in that race, and a recognition that Moore’s Law scaling has diminishing returns for many applications. A major cost and performance driver is now a technology that was once considered easily outsourced to low-cost assemblers in Asia; packaging is suddenly front and center.


    A Peek at Intel’s Future Foundry Tech
    Samuel K. Moore | IEEE Spectrum

  • Iovance brings cell therapy to solid tumors

    Immune cell therapy — the use of modified immune cells directly to control cancer and autoimmune disease — has shown incredible results in liquid tumors (cancers of the blood and bone marrow like lymphoma, leukemia, etc), but has stumbled in addressing solid tumors.

    Iovance, which recently had its drug lifileucel approved by the FDA to treat advanced melanoma, has demonstrated an interesting spin on the cellular path which may prove to be effective in solid tumors. They extract Tumor-Infiltrating Lymphocytes (TILs), immune cells that are already “trying” to attack a solid tumor directly. Iovance then treats those TILs with their own proprietary process to expand the number of those cells and “further activate” them (to resist a tumor’s efforts to inactivate immune cells that may come after them) before reintroducing them to the patient.

    This is logistically very challenging (not dissimilar to what patients awaiting other cell therapies or Vertex’s new sickle cell treatment need to go through) as it also requires chemotherapy for lymphocyte depletion in the patient prior to reintroduction of the activated TILs. But, the upshot is that you now have an expanded population of cells known to be predisposed to attacking a solid tumor that can now resist the tumor’s immune suppression efforts.

    And, they’ve presented some impressive 4-year followup data on a study of advanced melanoma in patients who have already failed immune checkpoint inhibitor therapy, enough to convince the FDA of their effectiveness!

    To me, the beauty of this method is that it can work across tumor types. Iovance’s process (from what I’ve gleaned from their posters & presentations) works by getting more and more activated immune cells. Because they’re derived from the patient, these cells are already predisposed to attack the particular molecular targets of their tumor.

    This is in contrast to most other immune cell therapy approaches (like CAR-T), where the process is inherently target-specific (i.e. get cells that go after this particular marker on this particular tumor) and each new target / tumor requires R&D work to validate. Couple this with the fact that TILs are already the body’s first line of defense against solid tumors, and you may have an interesting platform for immune cell therapy in solid tumors.

    The devil’s in the details, and this requires more clinical study on more cancer types, but suffice it to say, I think this is incredibly exciting!


  • Another Italian merchant invention: the decimal point!

    I’ve always been astonished by how many things we use now came from Renaissance Italian merchants: the @ sign, double-entry bookkeeping, banking, maritime insurance, and now the decimal point


  • Wind and solar closing on fossil fuels in EU power generation

    This one chart (published in Canary Media) illustrates both the case for optimism about our ability to deal with climate change and a clear case of how geopolitical pressures can dramatically impact energy choices: the rapid increase in the use of renewable energy (mainly at the expense of fossil fuels) as a source of electricity in the EU.


  • Don’t Pay for Adobe Acrobat to do Basic PDF Things

    (Note: this is part of my ongoing series on cheaply selfhosting)

    If you’re like me, every few months you have to do something with PDFs:

    • Merge them
    • Rotate them
    • Crop them
    • Add / remove a password
    • Move pages around / remove pages
    • Sign them
    • Add text / annotations to them

    This ends up either being a pain to do (via some combination of screenshots, printing, scanning, and exporting) or oddly expensive (buying a license to Adobe Acrobat or another paid PDF manipulation tool).

    Enter Stirling PDF tools, a set of free web-based PDF manipulation tools which can also be selfhosted on any server supporting Docker. Given my selfhosting journey these past couple of months, this seemed like a perfect project to take on.

    In the hopes that this helps anyone who has ever had to do some PDF manipulation, I will share how I set up Stirling PDF tools (on my OpenMediaVault v6 home server).

    Stirling PDF

    Stirling tools started as a ChatGPT project and has since grown into an open source project with millions of Docker pulls. It handles everything through a simple web interface, entirely on the server (no calls to any remote service). Depending on the version you install, you also get tools for converting common Office files to PDF and for OCR (optical character recognition, where software can recognize text, even handwriting, in images).

    And, best of all, it’s free! (As in beer and as in freedom!)

    Installation

    To install the Stirling Tools on OpenMediaVault:

    • If you haven’t already, make sure you have OMV Extras and Docker Compose installed (refer to the section Docker and OMV-Extras in my previous post, you’ll want to follow all 10 steps as I refer to different parts of the process throughout this post) and have a static local IP address assigned to your server.
    • Login to your OpenMediaVault web admin panel, and then go to [Services > Compose > Files] in the sidebar. Press the button in the main interface to add a new Docker compose file.

      Under Name put down Stirling and under File, adapt the following (making sure the number of spaces are consistent)
      version: "3.3"
      services:
        stirling-pdf:
          image: frooodle/s-pdf:latest
          ports:
            - <unused port number like 7331>:8080
          environment:
            - DOCKER_ENABLE_SECURITY=false
          volumes:
            - '<absolute path to shared config folder>/tesseract:/usr/share/tessdata'
            - '<absolute path to shared config folder>/Stirling/configs:/config'
            - '<absolute path to shared config folder>/Stirling/customFiles:/customFiles'
            - '<absolute path to shared config folder>/Stirling/logs:/logs'
          restart: unless-stopped
      Under ports:, make sure to add an unused port number (I went with 7331).

      Replace <absolute path to shared config folder> with the absolute path to the config folder where you want Docker-installed applications to store their configuration information (accessible by going to [Storage > Shared Folders] in the administrative panel). You’ll notice there’s an extra line in there for tessdata — this corresponds to the stored files for the Tesseract tool that Stirling uses for OCR

      Once you’re done, hit Save and you should be returned to your list of Docker compose files for the next step. Notice that the new Stirling entry you created has a Down status, showing the container has yet to be initialized.
    • To start your Stirling container, click on the new Stirling entry and press the (up) button. This will create the container, download any files needed, and run it.

      And that’s it! To prove it worked, go to your-servers-static-ip-address:7331 from a browser that’s on the same network as your server (replacing 7331 if you picked a different port in the configuration above) and you should see the Stirling tools page (see below)
    • You can skip this step if you didn’t (as I laid out in my last post) set up Pihole and local DNS / Nginx proxy or if you don’t care about having a user-readable domain name for these PDF tools. But, assuming you do and you followed my instructions, open up WeTTy (which you can do by going to wetty.home in your browser if you followed my instructions or by going to [Services > WeTTY] from OpenMediaVault administrative panel and pressing Open UI button in the main panel) and login as the root user. Run:
      cd /etc/nginx/conf.d
      ls
      Pick out the file you created before for your domains and run
      nano <your file name>.conf
      This opens up the text editor nano with the file you just listed. Use your cursor to go to the very bottom of the file and add the following lines (making sure to use tabs and end each line with a semicolon)
      server {
          listen 80;
          server_name <pdf.home or the domain you'd like to use>;
          location / {
              proxy_pass http://<your-server-static-ip>:<PDF port number>;
          }
      }
      And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
      systemctl restart nginx
      Now, if your server sees a request for pdf.home (or whichever domain you picked), it will direct them to the PDF tools.

      Login to your Pihole administrative console (you can just go to pi.hole in a browser) and click on [Local DNS > DNS Records] from the sidebar. Under the section called Add a new domain/IP combination, fill out under Domain: the domain you just added above (i.e. pdf.home) and next to IP Address: you should add your server’s static IP address. Press the Add button and it will show up below.

      To make sure it all works, enter the domain you just added (pdf.home if you went with my default) in a browser and you should see the Stirling PDF tools page.
    • Lastly, to make the PDF tools actually usable, you’ll want to increase the maximum allowable file upload size in OpenMediaVault’s default webserver, Nginx (so that you can use the tools with PDFs larger than the tiny default limit of 1 MB). To do this, log back into your server using WeTTy (follow the instructions above) and run:
      cd /etc/nginx/
      nano nginx.conf
      This opens up the master configuration file for Nginx in the text editor nano. Use your cursor to go to a spot after http { but before the closing }. This section configures how Nginx will process HTTP requests (basically anything coming from a website). Enter the two lines below (making sure to use tabs and end the second line with a semicolon; to be clear, "... stuff that comes by default ..." is just placeholder text that you don’t need to write or add, it’s just to show that the two lines you enter need to be inside the {})
      http {
          ... stuff that comes by default ...

          ## adding larger file upload limit
          client_max_body_size 100M;

          ... more stuff that comes by default ...
      }
      And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
      systemctl restart nginx
      Now, the PDF tools can handle file uploads up to 100 MB in size!
    • Finally, to make full use of OCR, you’ll want to download the language files you’re most interested in from the Tesseract repository (the slower but more accurate files are here and the faster but less accurate files are here; simply click on the file you’re interested in from the list and then select Download from the “three dot” menu or by hitting Ctrl+Shift+s) and place them in the /tesseract folder you mapped in the Docker compose file. To verify that those files are properly loaded, simply go to the PDF tools, select the one called OCR / Cleanup scans (or visit <URL to PDF tools>/ocr-pdf) and the language files that you’ve downloaded should show up as checkboxes.

    And now, you have a handy set of PDF tools in your (home server) back pocket!

    (If you’re interested in how to setup a home server on OpenMediaVault or how to self-host different services, check out all my posts on the subject)

  • The Opportunity in Lagging Edge Semiconductors

    While much attention is (rightly) focused on the role of TSMC (and its rivals Samsung and Intel) in “leading edge” semiconductor technology, the opportunity at the so-called “lagging edge” — older semiconductor process technologies which continue to be used — is oftentimes completely ignored.

    The reality of the foundry model is that fab capacity is expensive to build and so the bulk of the profit made on a given process technology investment is when it’s years old. This is a natural consequence of three things:

    1. Very few semiconductor designers have the R&D budget or the need to be early adopters of the most advanced technologies. (That is primarily relegated to the sexiest advanced CPUs, FPGAs, and GPUs, but ignores the huge bulk of the rest of the semiconductor market)
    2. Because only a small handful of foundries can supply “leading edge” technologies and because new technologies have a “yield ramp” (where the technology goes from low yield to higher as the foundry gets more experience), new process technologies are meaningfully more expensive.
    3. Some products have extremely long lives and need to be supported for a decade or more (automotive, industrial, and military applications immediately come to mind)

    As a result, it was very rational for GlobalFoundries (formerly AMD’s in-house fab) to abandon producing advanced semiconductor technologies in 2018 to focus on building a profitable business at the lagging edge. Foundries like UMC and SMIC have largely made the same choice.

    This means giving up on some opportunities (those that require newer technologies), as GlobalFoundries has found recently in areas like communications and data center. But, provided you have the service capability and capacity, it can still lead not only to a profitable outcome, but to one which is incredibly important to the increasingly strategic semiconductor space.


  • NVIDIA to make custom AI chips? Tale as old as time

    Every standard products company (like NVIDIA) eventually gets lured by the prospect of gaining large volumes and high margins of a custom products business.

    And every custom products business wishes they could get into standard products to cut their dependency on a small handful of customers and pursue larger volumes.

    Given the above, the fact that NVIDIA used to effectively build custom products (i.e. for game consoles and for some of its dedicated autonomous vehicle and media streamer projects), and the efforts by cloud vendors like Amazon and Microsoft to build their own artificial intelligence silicon, it shouldn’t be a surprise to anyone that they’re pursuing this.

    Or that they may eventually leave this market behind as well.


  • Which jobs are the most [insert gender or race]?

    Fascinating data from the BLS on which jobs have the greatest share of a particular gender or race. The following two charts are from the WSJ article I linked. I never would have guessed that speech-language pathologists (women), property appraisers (white), postal service workers (black), or medical scientists (Asian) would have such a preponderance of a particular group.


  • Trouble in commercial real estate

    Commercial real estate (and, by extension, community banks) are in a world of hurt as hybrid/remote work, higher interest rates, and property bubbles deflating/popping collide…


    The Brutal Reality of Plunging Office Values Is Here
    Natalie Wong & Patrick Clark | Bloomberg

  • Selfhosting FreshRSS

    (Note: this is part of my ongoing series on cheaply selfhosting)

    It’s been a few months since I started down the selfhosting/home server journey. Thanks to Docker, it has been relatively smooth sailing. Today, I have a cheap mini-PC based server that:

    • blocks ads / online trackers on all devices
    • stores and streams media (even for when I’m out of the house)
    • acts as network storage (for our devices to store and share files)
    • serves as a personal RSS/newsreader

    The last one is new since my last post and, in the hopes that this helps others exploring what they can selfhost or who maybe have a home server and want to start deploying services, I wanted to share how I set up FreshRSS, a self-hosted RSS reader (on an OpenMediaVault v6 server)

    Why a RSS Reader?

    Like many who used it, I was a massive Google Reader fan. Until 2013 when it was unceremoniously shut down, it was probably the most important website I used after Gmail.

    I experimented with other RSS clients over the years, but found that I did not like most commercial web-based clients (which were focused on serving ads or promoting feeds I was uninterested in) or desktop clients (which were difficult to sync between devices). So, I switched to other alternatives (i.e. Twitter) for a number of years.

    FreshRSS

    Wanting to return to the simpler days where I could simply follow the content I was interested in, I stumbled on the idea of self-hosting an RSS reader. Looking at the awesome-selfhosted feed reader category, I looked at the different options and chose to go with FreshRSS for a few reasons:

    Installation

    To install FreshRSS on OpenMediaVault:

    • If you haven’t already, make sure you have OMV Extras and Docker Compose installed (refer to the section Docker and OMV-Extras in my previous post, you’ll want to follow all 10 steps as I refer to different parts of the process throughout this post) and have a static local IP address assigned to your server.
    • Login to your OpenMediaVault web admin panel, and then go to [Services > Compose > Files] in the sidebar. Press the button in the main interface to add a new Docker compose file.

      Under Name put down FreshRSS and under File, adapt the following (making sure the number of spaces are consistent)
      version: "2.1"
      services:
        freshrss:
          container_name: freshrss
          image: lscr.io/linuxserver/freshrss:latest
          ports:
            - <unused port number like 3777>:80
          environment:
            - TZ=America/Los_Angeles
            - PUID=<UID of Docker User>
            - PGID=<GID of Docker User>
          volumes:
            - '<absolute path to shared config folder>/FreshRSS:/config'
          restart: unless-stopped
      You’ll need to replace <UID of Docker User> and <GID of Docker User> with the UID and GID of the Docker user you created (which will be 1000 and 100 if you followed the steps I laid out, see Step 10 in the section “Docker and OMV-Extras” in my initial post)

      I live in the Bay Area so I set the timezone TZ to America/Los_Angeles. You can find yours here.

      Under ports:, make sure to add an unused port number (I went with 3777).

      Replace <absolute path to shared config folder> with the absolute path to the config folder where you want Docker-installed applications to store their configuration information (accessible by going to [Storage > Shared Folders] in the administrative panel).

      Once you’re done, hit Save and you should be returned to your list of Docker compose files for the next step. Notice that the new FreshRSS entry you created has a Down status, showing the container has yet to be initialized.
    • To start your FreshRSS container, click on the new FreshRSS entry and press the (up) button. This will create the container, download any files needed, and run it.

      And that’s it! To prove it worked, go to your-servers-static-ip-address:3777 from a browser that’s on the same network as your server (replacing 3777 if you picked a different port in the configuration above) and you should see the FreshRSS installation page (see below)
    • You can skip this step if you didn’t (as I laid out in my last post) set up Pihole and local DNS / Nginx proxy or if you don’t care about having a user-readable domain name for FreshRSS. But, assuming you do and you followed my instructions, open up WeTTy (which you can do by going to wetty.home in your browser if you followed my instructions or by going to [Services > WeTTY] from OpenMediaVault administrative panel and pressing Open UI button in the main panel) and login as the root user. Run:
      cd /etc/nginx/conf.d
      ls
      Pick out the file you created before for your domains and run
      nano <your file name>.conf
      This opens up the text editor nano with the file you just listed. Use your cursor to go to the very bottom of the file and add the following lines (making sure to use tabs and end each line with a semicolon)
      server {
          listen 80;
          server_name <rss.home or the domain you'd like to use>;
          location / {
              proxy_pass http://<your-server-static-ip>:<FreshRSS port number>;
          }
      }
      And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
      systemctl restart nginx
      Now, if your server sees a request for rss.home (or whichever domain you picked), it will direct them to FreshRSS.

      Login to your Pihole administrative console (you can just go to pi.hole in a browser) and click on [Local DNS > DNS Records] from the sidebar. Under the section called Add a new domain/IP combination, fill out under Domain: the domain you just added above (i.e. rss.home) and next to IP Address: you should add your server’s static IP address. Press the Add button and it will show up below.

      To make sure it all works, enter the domain you just added (rss.home if you went with my default) in a browser and you should see the FreshRSS installation page.
    • Completing installation is easy. Thanks to the use of Docker, all of the PHP and file configuration will be handled correctly, so you should be able to proceed with the default options. Unless you’re planning to store millions of articles served to dozens of people, the default option of SQLite as the database type should be sufficient in Step 3 (see below)


      This leaves the final task of configuring a username and password (and, again, unless you’re serving this to many users whom you’re worried will hack you, the default authentication method of Web form will work)


      Finally, press Complete installation and you will be taken to the login page:

    Advice

    Once you’ve logged in with the username and password you just set, the world is your oyster. If you’ve ever used an RSS reader, the interface is pretty straightforward, but the key is to use the Subscription management button in the interface to add RSS feeds and categories as you see fit. FreshRSS will, on a regular basis, look for new content from those feeds and put it in the main interface. You can then step through and stay up to date on the sites that matter to you. There are a lot more features you can learn about from the FreshRSS documentation.

    On my end, I’d recommend a few things:

    • How to find the RSS feed for a page — Many (but not all) blog/news pages have RSS feeds. The most reliable way to find it is to right click on the page you’re interested in from your browser and select View source (on Chrome you’d hit Ctrl+U). Hit Ctrl+F to trigger a search and look for rss. If there is an RSS feed, you’ll see something that says "application/rss+xml" and near it will usually be a URL that ends in /rss or /feed or something like that (my blog, for instance, hosted on benjamintseng.com has a feed at benjamintseng.com/rss).
      • Once you open up the feed, copy its URL and add it to FreshRSS via the Subscription management button.
    • Learn the keyboard shortcuts — they’re largely the same as found on Gmail (and the old Google Reader) but they make using this much faster:
      • j to go to the next article
      • k to go to the previous article
      • r to toggle if something is read or not
      • v to open up the original page in a new tab
    • Use the normal view, sorted oldest first — (you do this by tapping the Settings gear in the upper-right of the interface and then selecting Reading under Configuration in the menu). Even though I’ve aggressively curated the feeds I subscribe to, there is a lot of material, and the “normal view” allows me to quickly browse headlines to see which ones are most worth my time at a glance. I can also use my mouse to selectively mark some things as read so I can take a quick Inbox Zero style approach to my feeds. This allows me to think of the j shortcut as “move forward in time” and the k shortcut as “move backwards”, and I can use the pulldown menu next to the Mark as read button to mark content older than one day / one week as read if I get overwhelmed.
    • Subscribe to good feeds — probably a given, but here are a few I follow to get you started:

    I hope this helps you get started!
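    If you want to automate the feed-discovery tip above, here’s a minimal Python sketch (standard library only) that does programmatically what Ctrl+U / Ctrl+F does by hand: parse a page’s HTML and pull out any `<link>` tags advertising an RSS or Atom feed. The `sample` page below is made up for illustration (it mimics the `<head>` you’d see when viewing source on a typical blog):

    ```python
    from html.parser import HTMLParser

    class FeedFinder(HTMLParser):
        """Collects the href of any <link> tag advertising an RSS/Atom feed."""
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            if tag != "link":
                return
            a = dict(attrs)
            if a.get("type") in ("application/rss+xml", "application/atom+xml"):
                self.feeds.append(a.get("href"))

    def find_feeds(html: str) -> list:
        """Return all feed URLs declared in the page's HTML."""
        parser = FeedFinder()
        parser.feed(html)
        return parser.feeds

    # Example: a stripped-down page head like the one you'd see via View source
    sample = """
    <html><head>
      <title>My Blog</title>
      <link rel="alternate" type="application/rss+xml"
            title="My Blog Feed" href="https://benjamintseng.com/rss" />
    </head><body>...</body></html>
    """

    print(find_feeds(sample))  # → ['https://benjamintseng.com/rss']
    ```

    In practice you’d fetch the page first (e.g., with urllib or requests) and pass the response body to find_feeds; any URL it returns can be pasted straight into FreshRSS.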

    (If you’re interested in how to setup a home server on OpenMediaVault or how to self-host different services, check out all my posts on the subject)

  • Stocks for the Long Run? Maybe not all the Time

    One of the core assumptions of modern financial planning and finance is that stocks have better returns over the long-run than bonds.

    The reason “seems” obvious: stocks are riskier. There is, after all, a greater chance of going to zero, since bond investors stand ahead of stock investors in the legal line to get paid out after a company fails. Furthermore, stocks let an investor participate in the upside (if a company grows rapidly), whereas bonds limit your upside to the interest payments.

    A fascinating article by Santa Clara University Professor Edward McQuarrie, published in late 2023 in Financial Analysts Journal, puts that entire foundation into doubt. McQuarrie collects a tremendous amount of data to compute total US stock and bond returns going back to 1792, using newly available historical records and data from periodicals of that era. The result is a much more complete dataset, including:

    • coverage of bonds and stocks traded outside of New York
    • coverage of companies that failed (such as The Second Bank of the United States which, at one point, represented ~30% of total US market capitalization and unceremoniously failed after its charter was not renewed)
    • inclusion of dividend data (omitted in many prior studies)
    • calculation of returns on a capitalization-weighted basis (as opposed to price-weighting / equal-weighting, which is easier to do but less accurately conveys the returns investors actually see)
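    To make the capitalization-weighting point concrete, here is a small sketch with entirely made-up numbers (not from the paper): three stocks, their market caps, and their one-period returns, compared under equal weighting and cap weighting:

    ```python
    # Illustrative (made-up) data: three stocks' market caps and one-period returns
    caps    = [300.0, 150.0, 50.0]   # market capitalizations (e.g., $M)
    returns = [0.02, -0.01, 0.10]    # one-period total returns

    # Equal-weighted: a simple average -- easy to compute from price tables alone
    equal_weighted = sum(returns) / len(returns)

    # Capitalization-weighted: each return weighted by the stock's share of total
    # market cap, which better reflects what the aggregate investor actually earned
    total_cap = sum(caps)
    cap_weighted = sum(c / total_cap * r for c, r in zip(caps, returns))

    print(round(equal_weighted, 4))  # 0.0367 -- the small stock's big gain dominates
    print(round(cap_weighted, 4))    # 0.019  -- weighted toward the large stocks
    ```

    The gap between the two numbers is exactly why equal- or price-weighted historical series (like many pre-McQuarrie studies used) can misstate the returns investors actually experienced.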

    The data is fascinating, as it shows that, contrary to the opinion of most “financial experts” today, it is not true that stocks always beat bonds in the long run. In fact, the much better performance of stocks in the US seems to be mainly a 1940s–1980s phenomenon (see Figure 1 from the paper below).

    Stock and bond performance (normalized to $1 in 1792, and renormalized in 1982) on a logarithmic scale
    Source: Figure 1, McQuarrie

    Put another way, if you had looked at stocks vs bonds in 1862, the sensible thing to tell someone would have been “well, some years stocks do better, some years bonds do better, but over the long haul, it seems bonds do better” (see Table 1 from the paper below).

    That is the exact opposite of what you would tell them today, having only looked at the post-War world.

    Source: Table 1, McQuarrie

    This problem is compounded if you look at non-US stock returns: even after excluding certain stock market performance periods due to war (i.e., Germany and Japan following World War II), a focus on just the last 5 decades shows comparable performance between non-US stocks and non-US government bonds.

    Even assumptions viewed as sacred, like how stocks and bonds can balance each other out because their returns are poorly correlated, show huge variation over history — with the two assets being highly correlated pre-Great Depression, but much less so (and swinging wildly) afterwards (see Figure 6 below).

    Stock and Bond Correlation over Time
    Source: Figure 6, McQuarrie
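    A rolling correlation like the one in the paper’s Figure 6 is simple to compute. Here is a minimal sketch using made-up annual return series (purely illustrative, not McQuarrie’s data) and a trailing-window Pearson correlation:

    ```python
    import math

    def pearson(xs, ys):
        """Pearson correlation of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def rolling_correlation(stock_returns, bond_returns, window):
        """Correlation over each trailing `window`-period span."""
        return [
            pearson(stock_returns[i : i + window], bond_returns[i : i + window])
            for i in range(len(stock_returns) - window + 1)
        ]

    # Made-up annual returns purely for illustration
    stocks = [0.08, -0.02, 0.12, 0.05, -0.10, 0.15, 0.07, 0.03]
    bonds  = [0.03, 0.01, 0.04, 0.02, 0.05, -0.01, 0.02, 0.03]

    # One correlation per trailing 5-year window; watching this series drift
    # (rather than quoting a single full-sample number) is what reveals the
    # kind of instability the paper documents
    print(rolling_correlation(stocks, bonds, window=5))
    ```

    The point of the windowed view is that a single full-history correlation can mask exactly the regime changes McQuarrie highlights.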

    Now neither I nor the paper’s author are suggesting you change your fundamental investment strategy as you plan for the long-term (I, for one, intend to continue allocating a significant fraction of my family’s assets to stocks for now).

    But, beyond some wild theorizing on why these changes have occurred throughout history, what this has reminded me is that the future can be wildly unknowable. Things can work one way and then suddenly stop. As McQuarrie pointed out recently in a response to a Morningstar commenter, “The rate of death from disease and epidemics stayed at a relatively high and constant level from 1793 to 1920. Then advances in modern medicine fundamentally and permanently altered the trajectory … or so it seemed until COVID-19 hit in February 2020.”



    Stocks for the Long Run? Sometimes Yes, Sometimes No
    Edward F. McQuarrie | Financial Analysts Journal

  • InVision founder retro

    As reported in The Information a few days ago, former design tool giant InVision, once valued at $2 billion, is shutting down at the end of this year.

    While much of the commentary has been about Figma’s rapid rise and InVision’s inability to respond, I saw this post on Twitter/X from one of InVision’s founders, Clark Valberg, about what happened. The screenshotted message he left is well worth a read. It is a great (if slightly self-serving / biased) retrospective.

    As someone who was a mere bystander during the events (as a newly minted Product Manager working with designers), it felt very true to the moment.

    I remember being blown away by how the entire product design community moved to Sketch (from largely Adobe-based solutions) and then, seemingly overnight, from Sketch to Figma.

    While it’s fair to criticize the leadership for not seeing web-based design as a place to invest, I think the piece highlights how, because Figma wasn’t a direct competitor to InVision (but to Sketch & Adobe XD) and because the idea of web-based design tools wasn’t on anyone’s radar at the time, it became a lethal blind spot for the company. It’s Tech Strategy 101 and perfectly illustrates Andy Grove’s old saying: “(in technology,) only the paranoid survive”.


    Tweet from @ClarkValberg
    Clark Valberg | Twitter/X

  • The only 3 things a startup CEO needs to master

    So, you watched Silicon Valley and read some articles on TechCrunch and you envision yourself as a startup CEO 🤑. What does it take to succeed? Great engineering skills? Salesmanship? Financial acumen?

    As someone who has been on both sides of the table (as a venture investor and on multiple startup executive leadership teams), there are three — and only three — things a startup CEO needs to master. In order of importance:

    1. Raise Money from Investors (now and in the future): The single most important job of a startup CEO is to secure funding from investors. Funding is the lifeblood of a company, and raising it is a job that only the CEO can drive. Not being great at it means slower growth and fewer resources, regardless of how brilliant you are or how great your vision is. Being good at raising money buys you a lot of buffer in every other area.
    2. Hire Amazing People into the Right Roles (and retain them!): No startup, no matter how brilliant the CEO, succeeds without a team. Thus, recruiting the right people into the right positions is the second most vital job of a CEO. Without the right people in place, your plans are not worth the paper on which they are written. Even if you have the right people, if they are not entrusted with the right responsibilities or they are unhappy, the wrong outcomes will occur. There is a reason that when CEOs meet to trade notes, they oftentimes trade recruiting tips.
    3. Inspire the Team During Tough Times: Every startup inevitably encounters stormy seas. It could be a recession causing a slowdown, a botched product launch, a failed partnership, or the departure of key employees. During these challenging times, the CEO’s job is to serve as chief motivator. Teams that resiliently bounce back after crises stand a better chance of surviving until things turn a corner.

    It’s a short list. And it doesn’t include:

    • deep technical expertise
    • an encyclopedic knowledge of your industry
    • financial / accounting skills
    • marketing wizardry
    • design talent
    • intellectual property / legal acumen

    It’s not that those skills aren’t important for building a successful company — they are. Nor are they useless to a would-be startup CEO: they would be valuable for anyone working at a startup. For startup CEOs in particular, they can help convince investors that the CEO is the right one to back, persuade talent to join, or reassure the team that the strategy the CEO has chosen is the right one.

    But, the reality is that these skills can be hired into the company. They are not what separates great startup CEOs from the rest of the pack.

    What makes a startup CEO great is their ability to nail the jobs that cannot be delegated. And that boils down to fundraising, hiring and retaining the best, and lifting spirits when things are tough. And that is the job.

    After all, startup investors write checks because they believe in the vision and leadership of a CEO, not a lackey. And startup employees expect to work for a CEO with a vision, not just a mouthpiece.

    So, want to become a startup CEO? Work on:

    • Storytelling — Learn how to tell stories that compel listeners. This is vital for fundraising (convincing investors to take a chance on you because of your vision), but also for recruiting & retaining people as well as inspiring a team during difficult times.
    • Reading People — Learn how to accurately read people. You can’t hire a superstar employee with other options, retain an unhappy worker through tough times, or overcome an investor’s concerns unless you understand their position. This means being attentive to what they tell you directly (e.g., over email, text, phone / video call, or in person) as well as paying attention to what they don’t (e.g., body language, how they act, what topics they discussed vs. didn’t).
    • Prioritization — Many startup CEOs got to where they are because they were superstars at one or more of the “unnecessary to be a great startup CEO” skills. But continuing to focus on that skill while ignoring the ones a startup CEO must be stellar at confuses what got you to the starting line with what will get you to the finish line. It is the CEO’s job to prioritize the tasks that they cannot delegate and to ruthlessly delegate everything else.