This one chart (published in Canary Media) illustrates both the case for optimism about our ability to deal with climate change and how geopolitical pressures can dramatically reshape energy choices: the rapid increase in renewable energy (mainly at the expense of fossil fuels) as a source of electricity in the EU.
The cleanest sources of electricity could soon make up the largest share of electricity generation in the European Union. Wind and solar made huge strides last year, producing more than one-quarter of the EU’s electricity for the first time, while fossil fuel generation plummeted. Power-sector emissions fell by a record 19 percent in the region last year.
If you’re like me, every few months you have to do something with PDFs:
Merge them
Rotate them
Crop them
Add / remove a password
Move pages around / remove pages
Sign them
Add text / annotations to them
This ends up either being a pain to do (via some combination of screenshots, printing, scanning, and exporting) or oddly expensive (buying a license to Adobe Acrobat or another paid PDF manipulation tool).
In the hopes that this helps anyone who has ever had to do some PDF manipulation work, I will share how I set up the Stirling PDF tools (on my OpenMediaVault v6 home server).
Stirling PDF
The Stirling tools started as a ChatGPT-assisted project which has since turned into an open source project with millions of Docker pulls. It handles everything through a simple web interface and entirely on your server (no calls to any remote service). Depending on the version you install, you also get tools for converting common Office files to PDF and for OCR (optical character recognition, where software recognizes text — even handwriting — in images).
And, best of all, it’s free! (As in beer and as in freedom!)
Installation
To install the Stirling Tools on OpenMediaVault:
If you haven’t already, make sure you have OMV Extras and Docker Compose installed (refer to the section Docker and OMV-Extras in my previous post; you’ll want to follow all 10 steps, as I refer to different parts of that process throughout this post) and have a static local IP address assigned to your server.
Log in to your OpenMediaVault web admin panel, and then go to [Services > Compose > Files] in the sidebar. Press the button in the main interface to add a new Docker compose file.
Under Name put down Stirling and under File, adapt the following (making sure the indentation stays consistent):
version: "3.3" services: stirling-pdf: image: frooodle/s-pdf:latest ports: - <unused port number like 7331>:8080 environment: - DOCKER_ENABLE_SECURITY=false volumes: - '<absolute path to shared config folder>/tesseract:/usr/share/tessdata' - '<absolute path to shared config folder>/Stirling/configs:/config' - '<absolute path to shared config folder>/Stirling/customFiles:/customFiles' - '<absolute path to shared config folder>/Stirling/logs:/logs' restart: unless-stopped
Under ports:, make sure to use an unused port number (I went with 7331); if you’re not sure whether a port is free, see the quick check below.
Replace <absolute path to shared config folder> with the absolute path to the config folder where you want Docker-installed applications to store their configuration information (accessible by going to [Storage > Shared Folders] in the administrative panel). You’ll notice there’s an extra line in there for tessdata — this corresponds to the stored files for the Tesseract tool that Stirling uses for OCR.
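If you want to confirm that the port you picked isn’t already taken, a quick check from the server’s shell (e.g. via WeTTy) looks something like the sketch below — this assumes the ss utility is available, which it is on Debian-based OpenMediaVault:

# list listening TCP ports and look for your chosen port (7331 here)
ss -tlnp | grep ':7331'
# no output means nothing is listening on that port, so it's safe to use in the compose file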
Once you’re done, hit Save and you should be returned to your list of Docker compose files for the next step. Notice that the new Stirling entry you created has a Down status, showing the container has yet to be initialized.
To start your Stirling container, click on the new Stirling entry and press the (up) button. This will create the container, download any files needed, and run it.
And that’s it! To prove it worked, go to your-servers-static-ip-address:7331 from a browser that’s on the same network as your server (replacing 7331 if you picked a different port in the configuration above) and you should see the Stirling tools page (see below)
You can skip this step if you didn’t (as I laid out in my last post) set up Pihole and local DNS / Nginx proxy or if you don’t care about having a user-readable domain name for these PDF tools. But, assuming you do and you followed my instructions, open up WeTTy (which you can do by going to wetty.home in your browser if you followed my instructions, or by going to [Services > WeTTY] from the OpenMediaVault administrative panel and pressing the Open UI button in the main panel) and log in as the root user. Run:
cd /etc/nginx/conf.d
ls
Pick out the file you created before for your domains and run
nano <your file name>.conf
This opens up the text editor nano with the file you just listed. Use your cursor to go to the very bottom of the file and add the following lines (making sure to indent consistently and end each directive with a semicolon):
server {
    listen 80;
    server_name <pdf.home or the domain you'd like to use>;

    location / {
        proxy_pass http://<your-server-static-ip>:<PDF port number>;
    }
}
And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
systemctl restart nginx
Now, when your server sees requests for pdf.home (or whichever domain you picked), it will direct them to the PDF tools.
Log in to your Pihole administrative console (you can just go to pi.hole in a browser) and click on [Local DNS > DNS Records] in the sidebar. Under the section called Add a new domain/IP combination, fill out under Domain: the domain you just added above (e.g. pdf.home) and next to IP Address: add your server’s static IP address. Press the Add button and it will show up below.
To make sure it all works, enter the domain you just added (pdf.home if you went with my default) in a browser and you should see the Stirling PDF tools page.
Lastly, to make the PDF tools actually usable, you’ll want to increase the maximum allowable file upload size in OpenMediaVault’s default webserver Nginx (so that you can use the tools with PDFs larger than the incredibly tiny default limit of 1 MB). To do this, log back into your server using WeTTy (follow the instructions above) and run:
cd /etc/nginx/
nano nginx.conf
This opens up the text editor nano with the master configuration file for Nginx. Use your cursor to go to a spot after http { but before the closing } — this block configures how Nginx processes HTTP requests (basically anything coming from a website). Enter the two lines below, making sure to indent consistently and end the second line with a semicolon. (To be clear, "... stuff that comes by default ..." is just placeholder text that you don’t need to write or add; it’s only there to show that the two lines you enter need to sit inside the {}.)
http {
    ... stuff that comes by default ...

    ## adding larger file upload limit
    client_max_body_size 100M;

    ... more stuff that comes by default ...
}
And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
systemctl restart nginx
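If you want to be careful, you can also have Nginx validate the edited configuration before (or right after) restarting — the nginx -t check is part of the standard nginx package, so this should work as-is:

# check the full configuration (including your edits) for syntax errors
nginx -t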
Now, the PDF tools can handle file uploads up to 100 MB in size!
Lastly, to make full use of OCR, you’ll want to download the language files you’re most interested in from the Tesseract repository (the slower but more accurate files are here and the faster but less accurate files are here; simply click on the file you’re interested in from the list and then select Download from the “three dot” menu or by hitting Ctrl+Shift+s) and place them in the /tesseract folder you mapped in the Docker compose file. To verify that those files are properly loaded, simply go to the PDF tools, select the one called OCR / Cleanup scans (or visit <URL to PDF tools>/ocr-pdf) and the language files you’ve downloaded should show up as checkboxes.
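If you’d rather grab a language file from the command line than click through GitHub, something like the sketch below should work — the exact repository URL and branch are assumptions on my part (double-check them against the Tesseract tessdata pages linked above), and it assumes wget is installed; swap in your own config path and language codes:

# change into the folder that the compose file maps to /usr/share/tessdata
cd <absolute path to shared config folder>/tesseract
# download the English model (use the tessdata_best or tessdata_fast repos for the more/less accurate variants)
wget https://github.com/tesseract-ocr/tessdata/raw/main/eng.traineddata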
And now, you have a handy set of PDF tools in your (home server) back pocket!
(If you’re interested in how to setup a home server on OpenMediaVault or how to self-host different services, check out all my posts on the subject)
While much attention is (rightly) focused on the role of TSMC (and its rivals Samsung and Intel) in “leading edge” semiconductor technology, the opportunity at the so-called “lagging edge” — older semiconductor process technologies which continue to be used — is oftentimes completely ignored.
The reality of the foundry model is that fab capacity is expensive to build, and so the bulk of the profit on a given process technology investment is made when the technology is years old. This is a natural consequence of three things:
Very few semiconductor designers have the R&D budget or the need to be early adopters of the most advanced technologies. (That is primarily the domain of the sexiest advanced CPUs, FPGAs, and GPUs, and leaves out the huge bulk of the rest of the semiconductor market.)
Because only a small handful of foundries can supply “leading edge” technologies and because new technologies have a “yield ramp” (where the technology goes from low yield to higher as the foundry gets more experience), new process technologies are meaningfully more expensive.
Some products have extremely long lives and need to be supported for a decade or more (automotive, industrial, and military applications immediately come to mind).
As a result, it was very rational for GlobalFoundries (formerly AMD’s in-house fab) to abandon producing advanced semiconductor technologies in 2018 to focus on building a profitable business at the lagging edge. Foundries like UMC and SMIC have largely made the same choice.
This means giving up on some opportunities (those that require newer technologies) — as GlobalFoundries has recently found in areas like communications and data center — but, provided you have the service capability and capacity, it can still lead not only to a profitable outcome, but to one that is incredibly important to the increasingly strategic semiconductor space.
When GlobalFoundries abandoned development of its 7 nm-class process technology in 2018 and refocused on specialty process technologies, it ceased pathfinding, research, and development of all technologies related to bleeding-edge sub-10nm nodes. At the time, this was the correct (and arguably only) move for the company, which was bleeding money and trailing behind both TSMC and Samsung in the bleeding-edge node race. But in the competitive fab market, that trade-off for reduced investment was going to eventually have consequences further down the road, and it looks like those consequences are finally starting to impact the company. In a recent earnings call, GlobalFoundries disclosed that some of the company’s clients are leaving for other foundries, as they adopt sub-10nm technologies faster than GlobalFoundries expected.
Every standard products company (like NVIDIA) eventually gets lured by the prospect of gaining large volumes and high margins of a custom products business.
And every custom products business wishes they could get into standard products to cut their dependency on a small handful of customers and pursue larger volumes.
Given the above, the fact that NVIDIA used to effectively build custom products (i.e. for game consoles and for some of its dedicated autonomous vehicle and media streamer projects), and the efforts by cloud vendors like Amazon and Microsoft to build their own artificial intelligence silicon, it shouldn’t be a surprise to anyone that NVIDIA is pursuing this.
Or that they may eventually leave this market behind as well.
While using NVIDIA’s A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA’s AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting custom silicon trend, NVIDIA wants to join it.
Fascinating data from the BLS on which jobs have the greatest share of a particular gender or race. The following two charts are from the WSJ article I linked. I never would have guessed that speech-language pathologists (women), property appraisers (white), postal service workers (black), or medical scientists (Asian) would have such a preponderance of a particular group.
The Bureau of Labor Statistics each year publishes data looking at the gender and racial composition of hundreds of occupations, offering a snapshot of how workers sort themselves into many of the most important jobs in the country.
There are sociology textbooks’ worth of explanations for these numbers. One clear conclusion: Many occupations skew heavily toward one gender or race, leading to a workforce where 96.7% of preschool and kindergarten teachers are women, two-thirds of manicurists and pedicurists are Asian, and 92.4% of pilots and flight engineers are white.
Commercial real estate (and, by extension, community banks) are in a world of hurt as hybrid/remote work, higher interest rates, and property bubbles deflating/popping collide…
Many banks still prefer to work out deals with existing landlords, such as offering loan extensions in return for capital reinvestments toward building upgrades. Still, that approach may not be viable in many cases; big companies from Blackstone to a unit of Pacific Investment Management Co. have walked away from or defaulted on properties they don’t want to pour more money into. In some cases, buildings may be worth even less today than the land they sit on.
“When people hand back keys, that’s not the end of it — the equity is wiped but the debt is also massively impaired,” said Dan Zwirn, CEO of asset manager Arena Investors, which invests in real estate debt. “You’re talking about getting close to land value. In certain cases people are going to start demolishing things.”
stores and streams media (even for when I’m out of the house)
acts as network storage (for our devices to store and share files)
serves as a personal RSS/newsreader
The last one is new since my last post and, in the hopes that this helps others exploring what they can self-host or who have a home server and want to start deploying services, I wanted to share how I set up FreshRSS, a self-hosted RSS reader (on an OpenMediaVault v6 server).
Why an RSS Reader?
Like many who used it, I was a massive Google Reader fan. Until 2013 when it was unceremoniously shut down, it was probably the most important website I used after Gmail.
I experimented with other RSS clients over the years, but found that I did not like most commercial web-based clients (which were focused on serving ads or promoting feeds I was uninterested in) or desktop clients (which were difficult to sync between devices). So, I switched to other alternatives (i.e. Twitter) for a number of years.
FreshRSS
Wanting to return to the simpler days where I could simply follow the content I was interested in, I stumbled on the idea of self-hosting an RSS reader. Looking at the awesome-selfhosted feed reader category, I looked at the different options and chose to go with FreshRSS for a few reasons:
It had the most GitHub stars of any feed reader — an imperfect but reasonable sign of a well-liked project.
If you haven’t already, make sure you have OMV Extras and Docker Compose installed (refer to the section Docker and OMV-Extras in my previous post; you’ll want to follow all 10 steps, as I refer to different parts of that process throughout this post) and have a static local IP address assigned to your server.
Log in to your OpenMediaVault web admin panel, and then go to [Services > Compose > Files] in the sidebar. Press the button in the main interface to add a new Docker compose file.
Under Name put down FreshRSS and under File, adapt the following (making sure the indentation stays consistent):
version: "2.1" services: freshrss: container_name: freshrss image: lscr.io/linuxserver/freshrss:latest ports: - <unused port number like 3777>:80 environment: - TZ: 'America/Los_Angeles' - PUID=<UID of Docker User> - PGID=<GID of Docker User> volumes: - '<absolute path to shared config folder>/FreshRSS:/config' restart: unless-stopped
I live in the Bay Area so I set the timezone TZ to America/Los_Angeles. You can find yours here.
Under ports:, make sure to add an unused port number (I went with 3777).
Replace <absolute path to shared config folder> with the absolute path to the config folder where you want Docker-installed applications to store their configuration information (accessible by going to [Storage > Shared Folders] in the administrative panel).
Once you’re done, hit Save and you should be returned to your list of Docker compose files for the next step. Notice that the new FreshRSS entry you created has a Down status, showing the container has yet to be initialized.
To start your FreshRSS container, click on the new FreshRSS entry and press the (up) button. This will create the container, download any files needed, and run it.
And that’s it! To prove it worked, go to your-servers-static-ip-address:3777 from a browser that’s on the same network as your server (replacing 3777 if you picked a different port in the configuration above) and you should see the FreshRSS installation page (see below)
You can skip this step if you didn’t (as I laid out in my last post) set up Pihole and local DNS / Nginx proxy or if you don’t care about having a user-readable domain name for FreshRSS. But, assuming you do and you followed my instructions, open up WeTTy (which you can do by going to wetty.home in your browser if you followed my instructions, or by going to [Services > WeTTY] from the OpenMediaVault administrative panel and pressing the Open UI button in the main panel) and log in as the root user. Run:
cd /etc/nginx/conf.d
ls
Pick out the file you created before for your domains and run
nano <your file name>.conf
This opens up the text editor nano with the file you just listed. Use your cursor to go to the very bottom of the file and add the following lines (making sure to indent consistently and end each directive with a semicolon):
server {
    listen 80;
    server_name <rss.home or the domain you'd like to use>;

    location / {
        proxy_pass http://<your-server-static-ip>:<FreshRSS port number>;
    }
}
And then hit Ctrl+X to exit, Y to save, and Enter to overwrite the existing file. Then in the command line run the following to restart Nginx with your new configuration loaded.
systemctl restart nginx
Now, when your server sees requests for rss.home (or whichever domain you picked), it will direct them to FreshRSS.
Log in to your Pihole administrative console (you can just go to pi.hole in a browser) and click on [Local DNS > DNS Records] in the sidebar. Under the section called Add a new domain/IP combination, fill out under Domain: the domain you just added above (e.g. rss.home) and next to IP Address: add your server’s static IP address. Press the Add button and it will show up below.
To make sure it all works, enter the domain you just added (rss.home if you went with my default) in a browser and you should see the FreshRSS installation page.
Completing installation is easy. Thanks to the use of Docker, the PHP environment and file permissions will already be configured correctly, so you should be able to proceed with the default options. Unless you’re planning to store millions of articles served to dozens of people, the default option of SQLite as database type should be sufficient in Step 3 (see below).
This leaves the final task of configuring a username and password (and, again, unless you’re serving this to many users who you’re worried will hack you, the default authentication method of Web form will work).
Finally, press Complete installation and you will be taken to the login page:
Advice
Once you’ve logged in with the username and password you just set, the world is your oyster. If you’ve ever used an RSS reader, the interface is pretty straightforward, but the key is to use the Subscription management button in the interface to add RSS feeds and categories as you see fit. FreshRSS will, on a regular basis, look for new content from those feeds and put it in the main interface. You can then step through and stay up to date on the sites that matter to you. There are a lot more features you can learn about from the FreshRSS documentation.
On my end, I’d recommend a few things:
How to find the RSS feed for a page — Many (but not all) blog/news pages have RSS feeds. The most reliable way to find one is to right click on the page you’re interested in from your browser and select View source (on Chrome you’d hit Ctrl+U). Hit Ctrl+F to trigger a search and look for rss. If there is an RSS feed, you’ll see something that says "application/rss+xml" and near it will usually be a URL that ends in /rss or /feed or something like that (my blog, for instance, hosted on benjamintseng.com, has a feed at benjamintseng.com/rss). If you’d rather do this from a terminal, see the command-line sketch after this list.
Once you open up the feed, copy its URL and add it to FreshRSS via the Subscription management button.
Learn the keyboard shortcuts — they’re largely the same as found on Gmail (and the old Google Reader) but they make using this much faster:
j to go to the next article
k to go to the previous article
r to toggle if something is read or not
v to open up the original page in a new tab
Use the normal view, sorted oldest first — (you do this by tapping the Settings gear in the upper-right of the interface and then selecting Reading under Configuration in the menu). Even though I’ve aggressively curated the feeds I subscribe to, there is a lot of material, and the “normal view” allows me to quickly browse headlines to see which ones are most worth my time. I can also use my mouse to selectively mark some things as read so I can take a quick Inbox Zero style approach to my feeds. This allows me to think of the j shortcut as “move forward in time” and the k shortcut as “move backwards,” and I can use the pulldown menu next to the Mark as read button to mark content older than one day / one week as read if I get overwhelmed.
Subscribe to good feeds — probably a given, but here are a few I follow to get you started:
Paul Graham’s Essays (feed URL): love or hate him, Paul Graham does some great work writing very simple but thoughtful pieces
Collaborative Fund blog (feed URL): Morgan Housel is one of my favorite writers and he is responsible for most of the posts here
Benjamin Tseng’s blog (feed URL): you’re already here, aren’t you? 😇 You can also subscribe via email. I write about interesting things I’m reading, how-to guides like this, and my thoughts on tech / science / finance
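As a terminal-based alternative to the view-source approach described above, here’s a rough sketch for spotting a page’s advertised feed (it assumes curl and GNU grep are available, and uses my own blog as the example):

# fetch the page and pull out any <link> tags advertising an RSS feed
curl -sL https://benjamintseng.com | grep -io '<link[^>]*rss+xml[^>]*>'
# the href attribute in the output (e.g. https://benjamintseng.com/rss) is what you add to FreshRSS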
I hope this helps you get started!
(If you’re interested in how to setup a home server on OpenMediaVault or how to self-host different services, check out all my posts on the subject)
One of the core assumptions of modern financial planning and finance is that stocks have better returns over the long-run than bonds.
The reason “seems” obvious: stocks are riskier. There is, after all, a greater chance of going to zero, since bond investors come before stock investors in the legal line to get paid out after a company fails. Furthermore, stocks let an investor participate in the upside (if a company grows rapidly) whereas bonds limit your upside to the interest payments.
A fascinating article by Santa Clara University Professor Edward McQuarrie published in late 2023 in Financial Analysts Journal puts that entire foundation into doubt. McQuarrie collects a tremendous amount of data to compute total US stock and bond returns going back to 1792 using newly available historical records and data from periodicals from that timeframe. The result is a lot more data including:
coverage of bonds and stocks traded outside of New York
coverage of companies which failed (such as The Second Bank of the United States which, at one point, was ~30% of total US market capitalization and unceremoniously failed after its charter was not renewed)
inclusion of data on dividends (which were omitted in many prior studies)
calculation of returns on a capitalization-weighted basis (as opposed to price-weighted / equal-weighted, which is easier to do but less accurately conveys the returns investors actually see)
The data is fascinating, as it shows that, contrary to the opinion of most “financial experts” today, it is not true that stocks always beat bonds in the long run. In fact, the much better performance for stocks in the US seems to be mainly a 1940s-1980s phenomenon (see Figure 1 from the paper below).
Put another way, if you had looked at stocks vs bonds in 1862, the sensible thing to tell someone was “well, some years stocks do better, some years bonds do better, but over the long haul, it seems bonds do better” (see Table 1 from the paper below).
The exact opposite of what you would tell them today, having only looked at the post-war world.
This problem is compounded if you look at non-US stock returns where, even after excluding select periods of stock market performance due to war (e.g. Germany and Japan following World War II), focusing on just the last five decades shows comparable performance between non-US stocks and non-US government bonds.
Even assumptions viewed as sacred, like how stocks and bonds can balance each other out because their returns are poorly correlated, show huge variation over history — with the two assets being highly correlated pre-Great Depression, but much less so (and swinging wildly) afterwards (see Figure 6 below).
Now neither I nor the paper’s author are suggesting you change your fundamental investment strategy as you plan for the long-term (I, for one, intend to continue allocating a significant fraction of my family’s assets to stocks for now).
But, beyond some wild theorizing on why these changes have occurred throughout history, what this has reminded me is that the future can be wildly unknowable. Things can work one way and then suddenly stop. As McQuarrie pointed out recently in a response to a Morningstar commenter, “The rate of death from disease and epidemics stayed at a relatively high and constant level from 1793 to 1920. Then advances in modern medicine fundamentally and permanently altered the trajectory … or so it seemed until COVID-19 hit in February 2020.”
If stocks are risky, investors will demand a premium to invest. But if stocks cease to be risky once held for a long enough period—if stocks are certain to have strong returns after 20 years and certain to outperform bonds—then investors have no reason to expect a premium over these longer periods, given that no shortfall risk had to be assumed. The expanded historical record shows that stocks can perform poorly in absolute terms and underperform bonds, whether the holding period is 20, 30, 50, or 100 years. That documentation of risk resolves the conundrum.
While much of the commentary has been about Figma’s rapid rise and InVision’s inability to respond, I saw this post on Twitter/X from one of InVision’s founders, Clark Valberg, about what happened. The screenshotted message he left is well worth a read. It is a great (if slightly self-serving / biased) retrospective.
As someone who was a mere bystander during the events (as a newly minted Product Manager working with designers), it felt very true to the moment.
I remember being blown away by how the entire product design community moved to Sketch (from largely Adobe-based solutions) and then, seemingly overnight, from Sketch to Figma.
While it’s fair to criticize the leadership for not seeing web-based design as a place to invest, I think the piece highlights how, because Figma wasn’t a direct competitor to InVision (but to Sketch & Adobe XD) and because the idea of web-based design wasn’t on anyone’s radar at the time, it became a lethal blind spot for the company. It’s Tech Strategy 101 and perfectly highlights Andy Grove’s old saying: “(in technology,) only the paranoid survive”.
Hey Jason…
“Clark from InVision” here…
I’ve been somewhat removed from the InVision business since transitioning out ~2 years ago, and this is the first time I’ve reacted to the latest news publicly. I’m choosing to do so here because in many ways your post is a full-circle moment for me. MANY (perhaps most) of the underlying philosophies that drove InVision from the very beginning were inspired by my co-founder @BenNadel and I reading and re-reading Getting Real. It was our early-stage hymnal.
Apologies for the stream of consciousness rant and admitted inherent bias — I’m a founder after all 🙂
So, you watched Silicon Valley and read some articles on Techcrunch and you envision yourself as a startup CEO 🤑. What does it take to succeed? Great engineering skills? Salesmanship? Financial acumen?
As someone who has been on both sides of the table (as a venture investor and on multiple startup executive leadership teams), there are three — and only three — things a startup CEO needs to master. In order of importance:
Raise Money from Investors (now and in the future): The single most important job of a startup CEO is to secure funding from investors. Funding is the lifeblood of a company, and raising it is a job that only the CEO can drive. Not being great at it means slower growth / fewer resources, regardless of how brilliant you are, or how great your vision. Being good at raising money buys you a lot of buffer in every other area.
Hire Amazing People into the Right Roles (and retain them!): No startup, no matter how brilliant the CEO, succeeds without a team. Thus, recruiting the right people into the right positions is the second most vital job of a CEO. Without the right people in place, your plans are not worth the paper on which they are written. Even if you have the right people, if they are not entrusted with the right responsibilities or they are unhappy, the wrong outcomes will occur. There is a reason that when CEOs meet to trade notes, they oftentimes trade recruiting tips.
Inspire the Team During Tough Times: Every startup inevitably encounters stormy seas. It could be a recession causing a slowdown, a botched product launch, a failed partnership, or the departure of key employees. During these challenging times, the CEO’s job is to serve as chief motivator. Teams that can resiliently bounce back after crises stand a better chance of surviving until things turn a corner.
It’s a short list. And it doesn’t include:
deep technical expertise
an encyclopedic knowledge of your industry
financial / accounting skills
marketing wizardry
design talent
intellectual property / legal acumen
It’s not that those skills aren’t important for building a successful company — they are. It’s not even that these skills aren’t helpful for a would-be startup CEO — these skills would be valuable for anyone working at a startup to have. For startup CEOs in particular, these skills can help sell investors on why the CEO is the right one to invest in, convince talent to join, or persuade the team that the strategy the CEO has chosen is the right one.
But, the reality is that these skills can be hired into the company. They are not what separates great startup CEOs from the rest of the pack.
What makes a startup CEO great is their ability to nail the jobs that cannot be delegated. And that boils down to fundraising, hiring and retaining the best, and lifting spirits when things are tough. And that is the job.
After all, startup investors write checks because they believe in the vision and leadership of a CEO, not a lackey. And startup employees expect to work for a CEO with a vision, not just a mouthpiece.
So, want to become a startup CEO? Work on:
Storytelling — Learn how to tell stories that compel listeners. This is vital for fundraising (convincing investors to take a chance on you because of your vision), but also for recruiting & retaining people as well as inspiring a team during difficult times.
Reading People — Learn how to accurately read people. You can’t hire a superstar employee with other options, retain an unhappy worker through tough times, or overcome an investor’s concerns unless you understand their position. This means being attentive to what they tell you directly (e.g., over email, text, phone / video call, or in person) as well as paying attention to what they don’t (e.g., body language, how they act, what topics they discussed vs. didn’t, etc.).
Prioritization — Many startup CEOs got to where they are because they were superstars at one or more of the “unnecessary to be a great startup CEO” skills. But, continuing to focus on that skill and ignoring the skills that a startup CEO needs to be stellar at confuses the path to the starting point with the path to the finish line. It is the CEO’s job to prioritize those tasks that they cannot delegate and to ruthlessly delegate everything else.
Randomized controlled trials (RCTs) are the “gold standard” in healthcare for proving a treatment works. And for good reason. A well-designed and well-powered (i.e., large enough) clinical trial establishes what is really due to a treatment as opposed to another factor (e.g., luck, reversion to the mean, patient selection, etc.), and it’s a good thing that drug regulation is tied to successful trial results.
But, there’s one wrinkle. Randomized controlled trials are not reality.
RCTs are tightly controlled, where only specific patients (those fulfilling specific “inclusion criteria”) are allowed to participate. Follow-up is organized and adherence to protocol is tightly tracked. Typically, related medical care is also provided free of cost.
This is exactly what you want from a scientific and patient volunteer safety perspective, but, as we all know, the real world is messier. In the real world:
Physicians prescribe treatments to patients who don’t fit the exact inclusion criteria of the clinical trial. After all, many clinical trials exclude people who are extremely sick, children, or pregnant.
Patients may not take their designated treatment on time or in the right dose … and nobody finds out.
Follow-up on side effects and progress is oftentimes haphazard
Cost and free time considerations may change how and when a patient comes in
Physicians also have greater choice in the real world. They only prescribe treatments they think will work, whereas in a RCT, you get the treatment you’ve been randomly assigned to.
These differences raise the question: just how different is the real world from a randomized controlled trial?
A group in Canada studied this question and presented their findings at the recent ASH (American Society of Hematology) meeting. The researchers looked at ~4,000 patients in Canada with multiple myeloma, a cancer with multiple treatment regimens that have been developed and approved, used Canada’s national administrative database to track how patients did on 7 different treatment regimens, and compared the results to published RCT results for each treatment.
The findings are eye-opening. While there is big variation from treatment to treatment, in general, real world effectiveness was significantly worse, by a wide margin, than the efficacy published in the randomized controlled trials (see table below).
While the safety profiles (as measured by the rate of “adverse events”) seemed similar between real world and RCT, real world patients did, in aggregate, 44% worse on progression free survival and 75% worse on overall survival when compared with their RCT counterparts!
The only treatment where the real world did better than the RCT was in a study where it’s likely the trial volunteers were much sicker than average. (Note: the fact that one of seven treatment regimens went the other way and yet the aggregate is still 40%+ worse shows you that some of the comparisons were vastly worse.)
The lesson here is not that we should stop doing or listening to randomized controlled trials. After all, this study shows that they were reasonably good at predicting safety, not to mention that they continue to be our only real tool for establishing whether a treatment has real clinical value prior to giving it to the general public.
But this study imparts two key lessons for healthcare:
Do not assume that the results you see in a clinical trial are what you will see in the real world. Different patient populations, resources, treatment adherence, and many other factors will impact what you see.
Especially for treatments we expect to use with many people, real world monitoring studies are valuable in helping to calibrate expectations and, potentially, identify patient populations where a treatment is better or worse suited.
We have a Nissan Ariya and currently DON’T have a home charger (yet — waiting on solar which is another boondoggle for another post). As we live in a town with abundant EVGo chargers (and the Ariya came with 1 yr of free EVGo charging), we thought we could manage.
When it works, it’s amazing. But it doesn’t … a frustrating proportion of the time. And, as a result, we’ve become oddly superstitious about which chargers we go to and when.
I’m glad the charging companies are aware and are trying to address the problem. As someone who’s had to ship and support product, I also recognize that creating charging infrastructure in all kinds of settings which need to handle all kinds of electric vehicles is not trivial.
But, it’s damn frustrating to not be able to count on these (rest assured, we will be installing our own home charger soon), so I do hope that future Federal monies will have strict uptime requirements and penalties. Absent this, vehicle electrification becomes incredibly difficult outside of the suburban homeowner market.
J.D. Power reported in August that 20 percent of all non-Tesla EV drivers in its most recent study said they visited a charger but did not charge their vehicle, whether because the charger was inoperable or because of long wait times to use it, up from 15 percent in the first quarter of 2021.
Fear of inadequate public charging has now overtaken “range anxiety” as the chief concern about EVs among the car-buying public, according to J.D. Power. “Although the majority of EV charging occurs at home” — about 80 percent of it, according to industry data — “public charging needs to provide a much better experience across the board, not just for the users of today, but also to alleviate the concerns of skeptical future customers,” said Brent Gruber, executive director of J.D. Power’s global automotive practice.
The collapse of China’s massive property bubble is under way, and it is wreaking havoc, as a significant amount of the debt raised by Chinese property builders came from offshore investors.
Because of (well-founded) concerns on how Chinese Mainland courts would treat foreign concerns, most of these agreements have historically been conducted under Hong Kong law. As a result, foreign creditors have (understandably) hauled their deadbeat Chinese property builder debtors to court there.
While the judgements (especially from Linda Chan, the subject of this Bloomberg article) are unsurprisingly going against the Chinese property builders (who have been slow to release credible debt restructuring plans), the big question remains whether the Mainland Chinese government will actually enforce these rulings. Doing so would certainly make life harder for the (at least until recently, very well-connected) Chinese property builders at a moment of weakness in the sector.
But, failure to do so would also hurt the Chinese government’s goal of encouraging more foreign investment: after all, why would you invest in a country where you can’t trust the legal paper?
Never before has there been such a wave of Chinese corporate defaults on bonds sold to foreign investors. And never in recent memory has a bankruptcy judge in Hong Kong, the de-facto home for such cases, earned a reputation for holding deadbeat companies to account quite like Chan.
Chan, 54, has displayed an unwavering determination to give creditors a fair shot at recouping as much of their money as they can. One morning in early May, she shocked the packed courtroom by suddenly ordering the liquidation of Jiayuan. She had peppered the company’s lawyers that day as they tried, unsuccessfully, to explain why they needed more time to iron out their debt restructuring proposal.
And then, late last month, Chan put lawyers for Evergrande, the most indebted developer of them all, on notice: Either turn over a concrete restructuring proposal in five weeks or face the same fate as Jiayuan.
It’s both unsurprising but also astonishing at the same time.
Amazon.com has grabbed the crown of biggest delivery business in the U.S., surpassing both UPS and FedEx in parcel volumes.
The Seattle e-commerce giant delivered more packages to U.S. homes in 2022 than UPS, after eclipsing FedEx in 2020, and it is on track to widen the gap this year, according to internal Amazon data and people familiar with the matter. The U.S. Postal Service is still the biggest parcel service by volume; it handles hundreds of millions of packages for all three companies.
Market phase transitions have a tendency to be incredibly disruptive to market participants. A company or market segment that used to be the “alpha wolf” can suddenly find itself an outsider in a short time. Look at how quickly Research in Motion (makers of the Blackberry) went from industry darling to laggard after Apple’s iPhone transformed the phone market.
Something similar is happening in the high performance computing (HPC) world (colloquially known as supercomputers). Built to do the highly complex calculations needed to simulate complex physical phenomena, HPC was, for years, the “Formula One” of the computing world. New memory, networking, and processor technologies oftentimes got their start in HPC, as it was the application that was most in need of pushing the edge (and had the cash to spend on exotic new hardware to do it).
The use of GPUs (graphics processing units) outside of games, for example, was an HPC calling card. NVIDIA’s CUDA framework, which has helped give it such a lead in the AI semiconductor race, was originally built to accelerate the types of computations that HPC could benefit from.
The success of deep learning as the chosen approach for AI benefited greatly from this initial work in HPC, as the math required to make deep learning work was similar enough that existing GPUs and programming frameworks could be adapted. And, as a result, HPC benefited as well, as more interest and investment flowed into the space.
But we’re now seeing a market transition. Unlike HPC, which performs mathematical operations requiring every last iota of precision on mostly dense matrices, AI inference works on sparse matrices and does not require much precision at all. This has resulted in a shift in the industry away from software and hardware that works for both HPC and AI and towards the much larger AI market specifically.
The HPC community is used to being first, and we always considered ourselves as the F1 racing team of computing. We invent the turbochargers and fuel injection and the carbon fiber and then we put that into more general purpose vehicles, to use an analogy. I worry that the HPC community has sort of taken the backseat when it comes to AI and is not leading the charge. Like you, I’m seeing a lot of this AI stuff being led out of the hyperscalers and clouds. And we’ve got to find a way to take that back and carve our own use cases. There are a lot more HPC sites around the world than there are cloud sites, and we have got access to all a lot of data.
I’m over two months late to seeing this study, but a brilliant study design (use insurance data to measure rate of bodily injury and property damage) and strong, noteworthy conclusion (doesn’t matter how you cut it, Waymo’s autonomous vehicle service resulted in fewer injuries per mile and less property damage per mile than human drivers in the same area) make this worthwhile to return to! Short and sweet paper from researchers from Waymo, Swiss Re (the re-insurer), and Stanford that is well worth the 10 minute read!
When TO and RO datasets were combined, totaling 39,096,826 miles, there was a significant reduction in bodily injury claims frequency by 93% (0.08 vs 1.09 claims per million miles), TO+RO BI 95% CI [0.02, 0.22], Baseline 95% CI [1.08, 1.09]. Property damage claims frequency was significantly reduced by 93% (0.23 vs 3.17 claims per million miles), TO+RO PDL 95% CI [0.11, 0.44], Baseline 95% CI [3.16, 3.18].
My good friend Danny Goodman (and Co-Founder at Swarm Aero) recently wrote a great essay on how AI can help with America’s defense. He outlines 3 opportunities:
“Affordable mass”: Balancing/augmenting America’s historical strategy of pursuing only extremely expensive, long-lived “exquisite” assets (e.g. F-35’s, aircraft carriers) with autonomous and lower cost units which can safely increase sensor capability &, if it comes to it, serve as alternative targets to help safeguard human operators
Smarter war planning: Leveraging modeling & simulation to devise better tactics and strategies (think AlphaCraft on steroids)
Smarter procurement: Using AI to evaluate how programs and budget line items will actually impact America’s defensive capabilities to provide objectivity in budgeting
With the proper rules in place, AI is poised to be a transformative force that will strengthen America’s national defense. It will give our military new weapons systems and capabilities, smarter ways to plan for increasingly complex conflicts, and better ways to decide what to build and buy, and when. Along the way, it will help save both taxpayer dollars and, more importantly, lives.
As a parent myself, few things throw off my work day as much as a wrench in my childcare — like a kid being sick and needing to come home or a school/childcare center being closed for the day. The time required to change plans while balancing work, the desire to check-in on your child throughout the work day to make sure they’re doing okay… and this is as someone with a fair amount of work flexibility, a spouse who also has flexibility, and nearby family who can pitch in.
Childcare, while expensive, is a vital piece of the infrastructure that makes my and my spouse’s careers possible — and hence the (hopefully positive 😇) economic impact we have possible. It’s made me very sympathetic to the notion that we need to take childcare policy much more seriously — something that I think played out for millions of households when COVID disrupted schooling and childcare plans.
Census data suggest that, as things are, the child-care industry nationwide has been operating in the red for two straight years. Now, as programs still stressed by the pandemic lose a major source of public funds, many programs around the country are considering closure. When these businesses do shut down, they can send shock waves throughout their local economies. The shuttered child-care business sheds jobs; parents that relied on that business lose care arrangements for their kids, which in turn disrupts parents’ ability to work; and the employers of those parents must then scramble to adjust for lost workforce hours.
While each of those can feel like an individual misfortune, they are all part of a larger system of how our country cares for our young while adults work — or fails to do so. And the ripple effects can be enormous. Here’s one story of what happened downstream when a single day-care center in Wisconsin shut its doors.
Silicon nerd 🤓 that I am, I have gone through multiple cycles of excited-then-disappointed for Windows-on-ARM, especially considering the success of ChromeOS with ARM, the Apple M1/M2 (Apple’s own ARM silicon which now powers its laptops), and AWS Graviton (Amazon’s own ARM chip for its cloud computing services).
I may just be setting myself up for disappointment here, but these (admittedly vendor-provided) specs for their new Snapdragon X (based on technology they acquired from Nuvia and are currently being sued over by ARM) look very impressive. Biased as they may be, the fact that these chips perform in the same range as Intel/AMD/Apple silicon on single-threaded benchmarks (not to mention the multi-threaded applications which work well with the Snapdragon X’s 12 cores) hopefully bodes well for the state of CPU competition in the PC market!
Overall, Qualcomm’s early benchmark disclosure offers an interesting first look at what to expect from their forthcoming laptop SoC. While the competitive performance comparisons are poorly-timed given that next-generation hardware is just around the corner from most of Qualcomm’s rivals, the fact that we’re talking about the Snapdragon X Elite in the same breath as the M2 or Raptor Lake is a major achievement for Qualcomm. Coming from the lackluster Snapdragon 8cx SoCs, which simply couldn’t compete on performance, the Snapdragon X Elite is clearly going to be a big step up in virtually every way.
Qualcomm Snapdragon X Elite Performance Preview: A First Look at What’s to Come Ryan Smith | Anandtech
Gene editing makes possible new therapies and actual cures (not just treatments) that were previously impossible. But one thing that doesn’t get discussed a great deal is how these new gene editing-based therapies throw the “take two and call me in the morning” model out the window. Consider the steps involved in getting one of these therapies to a patient:
referral by hematologist (not to mention insurance approval!)
collection of cells (probably via bone marrow extraction)
(partial) myeloablation of the patient
shipping the cells to a manufacturing facility
manufacturing facility applies gene editing on the cells
shipping of cells back
infusion of the gene edited cells to the patient (so they hopefully engraft back in their bone marrow)
Each step is complicated and has its own set of risks. And, while there are many economic aspects of this that are similar to more traditional drug regimens (high price points, deep biological understanding of disease, complicated manufacturing [especially for biologicals], medical / insurance outreach, patient education, etc.), gene editing-based therapies (which can also include CAR-T therapy) now require a level of ongoing operational complexity that the biotech/pharmaceutical industries will need to adapt to if we want to bring these therapies to more people.
To make and administer the therapy is laborious, first requiring a referral from a hematologist. If the patient is eligible, their cells are collected and shipped to a manufacturing facility where they’re genetically edited to express a form of an essential protein called hemoglobin.
The cells are then shipped back to a treatment facility that infuses them into the patient’s bone marrow. But to make sure there’s enough room for these new cells, patients first undergo myeloablation — a chemotherapy regimen that can be very difficult on their bodies and comes with the risk of infertility. Older patients may not be healthy enough to receive this treatment.
“This is an extensive and expensive process,” Arbuckle said.