Posts

How I learned to Innovate (a bit more) in Regulated Industries

Here's a short story about how I learned to innovate (a bit more) in regulated industries. In the summer of 2020, Yves Prevoo had gotten easee into the first Techleap batch. Pretty cool! I was invited to join one of the sessions for the CTOs, in which Ali Niknam talked about his adventures. I didn't know anything about him then, but I liked him right away as he also wore a black t-shirt and a Casio F91w, my standard outfit. I had been struggling with innovating in a highly regulated industry. I dreaded the yearly ISO 13485 audits and always had the feeling we were doing everything wrong or breaking some law that we didn't know about yet. It didn't help that we didn't have anyone with Medical Device experience, and it didn't help that we had a very strict auditor. To be frank, I really had no idea how to develop anything new and get it certified without going bankrupt from the clinical trials. I didn't know what to do or how to lead my team in an inspiring way because of

One year of Freelancing as Fractional CTO

I'm taking some time to recharge after struggling as a startup CTO for 5 years (with lots of ups and downs). I worked 40 hours a week in salaried positions from 2010 till 2023, and I decided to take it a bit easy. And by easy I don't mean not working hard or doing simple work, but not being 100% committed to a company for the long term. In June 2023 I started freelancing as a Fractional CTO. It's for complicated projects or for companies that don't need a full-time CTO yet. It's been really great. In the past year I've: Posted my availability once in my "goodbye to easee" LinkedIn post. Been hired by 5 companies, all of which found me through my own network. Networked more than ever before. I suddenly have time for random meetings throughout the day. Have not started an LLC (BV), nor gotten insurance, nor opened a separate business bank account. Have signed only 1 contract for my freelance work (but some more NDAs) and almost never committed to a c

Struggle as a Service

For most of us, life is the easiest and most comfortable it has ever been in all of history. We have better healthcare than royalty had 100 years ago, we fly around the world for fun, and all the information and entertainment we could ever want is at our fingertips. In most civilized countries there are even decent welfare systems for the have-nots. Although social media and climate change are giving us major anxiety, it sure beats hunger, plagues, world wars or nuclear gloom. Humans have evolved with constant discomfort, and now we're comfortable. This is a reason for boredom and restlessness in the modern world. We want to be challenged and to experience risks. Instead of life on easy mode we need struggle. Perhaps this explains why seemingly normal people who have lots of other options start companies that do not make a lot of financial sense. There's a need to prove oneself, to work really hard, and to experience risks. Startups definitely let you do that. Whenever I see Saa

The long long tail of AI applications

Image: Long tail (credit to Wikipedia)
Last year multiple companies asked me for advice. "We are evaluating this AI-powered product, but do you think it makes sense at all? It seems a bit niche and we think ChatGPT might make this entire thing obsolete soon." My answer so far has always been: "No, bare LLMs are not going to compete with this product." I think that people are failing to understand the distinction between different classes of AI companies. I see it like this: Foundational AI - creating models like GPT-4 (text), Sora (video), etc. Applied AI - using existing models to create smart applications. Note that there are also companies not focusing on AI - they will start lagging behind, let's forget about them ;) There are orders of magnitude (!) more companies that will deal with Applied AI than with Foundational AI, and they will be very busy for decades to come. Here's why: To get the most out of LLMs, you have to ask the right questions. LLMs have acc
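As an illustration that isn't in the original post, here's a minimal Python sketch of what an Applied AI layer adds on top of a foundational model: proprietary domain context, a constrained prompt, and a workflow around the answer. The names here (call_foundation_model, triage_support_ticket) are hypothetical stand-ins, not a real product or API.

```python
# Hypothetical sketch: in Applied AI the value sits in the domain data and the
# workflow wrapped around the model, not in the model itself.

def call_foundation_model(prompt: str) -> str:
    """Stand-in for a call to whatever foundational model you use (GPT-4, etc.)."""
    raise NotImplementedError("plug in your model provider here")

def triage_support_ticket(ticket_text: str, product_docs: str, known_issues: list[str]) -> str:
    """The 'applied' part: context and constraints a bare chatbot does not have."""
    known = "\n".join(f"- {issue}" for issue in known_issues)
    prompt = (
        "You are a support engineer for our product. Use the documentation "
        "excerpt and the list of known issues to triage the ticket below.\n\n"
        f"Documentation:\n{product_docs}\n\n"
        f"Known issues:\n{known}\n\n"
        f"Ticket:\n{ticket_text}\n\n"
        "Reply with a category, a severity (low/medium/high) and a suggested next step."
    )
    return call_foundation_model(prompt)
```

A bare LLM could write the same prompt, but it doesn't have your documentation, your known issues, or your workflow; that gap is where the long tail of Applied AI companies lives.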

Follow-Up

Years ago I was a Product Manager of a product with lots of teething troubles. The Director of Support and I sat down from time to time to discuss the most urgent customer issues. One day he suddenly said, visibly annoyed, "I see that you're agreeing with me and writing down things in your notebook, but are you actually going to do something with it this time?" Needless to say, that hit home. I've seen many LinkedIn posts that go like "here are ten things that take 0 talent: following up on commitments, being on time, ..." etc. But I think that's wrong. Making sure you "follow up" is hard work. I'm a total pleaser by nature, I want to be agreeable and I dread confrontation. Telling someone "I'm not going to do anything with this" is very difficult. So my natural tendency is to say "Hm yes, that's a good point, it would be great if we did something with it, perhaps we can do ..." and then go off and brainstorm

AI programming tools should be added to the Joel Test

Here's a wake-up call to all CTOs: AI programming tools are getting freaking amazing, and if you don't allow your teams to use them somehow, it will bite you in the ass in a couple of years. You will be slower and you will lose your best people. The infamous Joel Test is a list from the year 2000 of 12 things all great software companies do. Since then most companies have implemented Git and CI/CD, checking off three items, so we have some space left in the 2024 update ;) I believe "Do you allow your developers to use AI-assisted development environments?" is a necessary addition. I get that you don't want your source code to end up on some OpenAI / Microsoft / Github server somewhere, sure, but find a way to use your own models or learn to live with it. Note that this is often not the same as "#9 - Do you use the best tools money can buy?" as blocking AI tools is about data security, not money. So why do I think developers need AI programming tools? I

Rhyme is a parity bit

Weird thought of the day. Speech patterns that rhyme are pleasing and resonate with our brain. Could there be an evolutionary benefit to it? Cultures without writing systems told stories to pass knowledge from one generation to the next. As we know from the Chinese whispers game, a lot of information is lost that way. During oral storytelling, information is added, removed or changed; it is wildly unreliable. In computer systems we add checksums or parity bits to lossy media to ensure reliable transmission. In a way, the constraint of rhyming words can be seen as a parity bit on sentences. The number of words that can be placed in a rhyming sentence is significantly lower than in an unconstrained one. Perhaps humans got better at reliable data transmission when they evolved to appreciate rhyme.
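As a side note that isn't in the original post, here's a minimal Python sketch of the parity-bit idea: one redundant bit constrains the message, so the receiver can detect (though not correct) a single flipped bit.

```python
# Minimal sketch of a parity bit: one redundant bit lets the receiver
# detect a single corrupted bit in the transmitted frame.

def parity(bits: list[int]) -> int:
    """Even parity: 1 if the number of 1-bits is odd, so the total count becomes even."""
    return sum(bits) % 2

def send(bits: list[int]) -> list[int]:
    return bits + [parity(bits)]   # append the parity bit

def looks_intact(frame: list[int]) -> bool:
    *bits, p = frame
    return parity(bits) == p       # recomputed parity must match the received bit

message = [1, 0, 1, 1, 0, 0, 1]
frame = send(message)
assert looks_intact(frame)         # clean transmission passes the check

frame[3] ^= 1                      # a "Chinese whispers" error: flip one bit
assert not looks_intact(frame)     # the corruption is detected
```

In the analogy, a listener who knows the story must rhyme has the same kind of check: a retelling that breaks the rhyme scheme is probably corrupted.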