I have been working with technology for over three decades now.

People sometimes ask me what has changed the most in that time. It’s not the speed of innovation — although that alone would be astonishing. Technology has moved from being a tool we use to a context in which we live, think, decide, and relate to one another. What was peripheral has become foundational. What was specialized is now universal. Fifteen years ago I told my banking colleagues that every business is a technology business, and some of them thought it was strange. I believed it then, and now no one can challenge it.

And yet, even as technologies have matured in capability, I increasingly feel that our collective understanding of what they mean for our lives, our societies, and our humanity has not kept pace.

That gap is not accidental. In many ways, it is the defining challenge of our time.

Not Just Smarter — But Wiser

When I started in technology, systems were scarce, slow, and demanding. You had to learn them deeply to use them at all. They were imperfect, but that imperfection forced awareness. You felt the edges. You felt the friction. You understood consequence because you lived it. It was tedious work and it required patience and persistence.

Today’s systems almost hide that consequence. They are fast, frictionless, and embedded in every domain of daily life. We expect them to just work. Because they do, we often forget to ask the deeper question: For what purpose? For whose wellbeing?

When technological ability outstrips our moral and institutional maturity, we end up with power without direction. Thinkers like Dario Amodei describe this as technology’s adolescence: a time of confidence without restraint and ability without wisdom. That concept resonated with me; it aligns with what I have observed globally and in the startups and founders I have worked with over the years.

But even as we talk about risk, there is another side to the story — one that is sometimes forgotten.

Consider Machines of Loving Grace, another essay by Amodei. There’s a framing there that I think deserves as much attention as the cautions. It asks not merely “What might go wrong?” but “What could go wonderfully right?” It presents examples where advanced technology, particularly AI, meaningfully enhances human wellbeing in areas like healthcare, economic development, governance, and creativity. It is not utopian — it is a disciplined optimism grounded in real potential. The point is not that technology will automatically deliver these outcomes, but that unless we imagine a future worth striving for, we will fail to build it.

In my work with founders and investors, I have seen up close how vision matters. A startup that simply optimizes for clicks or short-term growth seldom creates enduring value. A startup that grounds itself in human needs becomes something larger than itself. It alleviates pain, expands dignity, and enables opportunity. That’s a lesson that applies not just to individual companies but to the broader arc of technological progress.


Why Values Matter More Than Ever

We are often told that technology is neutral, that its impact depends on how it is used. But that is only partly true. Technology encodes choices. Every design decision, every improvement, every incentive structure embeds a set of values. And when these systems scale globally, those values scale with them.

This matters because today’s technology does not respect borders — cultural, political, or economic. It travels faster than any policy. It outpaces the committees and institutions that normally slow down change long enough for humans to understand it.

Yet human experience — our suffering, our joys, our aspirations — remains universal. Long before modern code and computation, great thinkers grappled with the core questions of human wellbeing. They sought ways to reduce suffering in the world.

One such framework — practical, clear, and deeply humane — comes from the teachings of the Buddha. Not as religious doctrine, but as a disciplined inquiry into human intention, awareness, and consequence.

These teachings start with an observation that every human being can recognize: suffering exists, it has causes, and those causes are often rooted in wanting without understanding and acting without awareness. In our age, when new technologies shape human attention and behavior at scale, this insight feels painfully relevant.

The Buddha could not have imagined machine learning systems or generative AI. But he understood something essential about cause and effect: that harm is often the result not of malice but of ignorance, that intention precedes action, and that awareness determines outcome.

I keep coming back to this after all these years. Most technological failures are not technical failures at all. They are human ones.

Not a lack of intelligence. Not a lack of capital. Often not even a lack of good intention. What is missing is inner development—the capacity to hold power without being distorted by it.

Long before technology carried global consequence, the Buddha spoke about this problem in human terms. Through the Pāramīs, he articulated a set of human capabilities that allow power to be exercised without causing harm. They were not a moral checklist. Reading them today, it is hard not to see them as a quiet instruction manual for a technological age.

Take generosity—not as charity, but as the refusal to hoard power, knowledge, or advantage. Modern technology naturally centralizes. Data concentrates. Platforms dominate. Without deliberate counterbalance, systems tilt toward extraction rather than service. Generosity, in this context, is the choice to design for access, participation, and shared advantage—especially when scale makes exclusion easy.

Or moral discipline — the idea that restraint is not weakness, but strength. In technology, speed is often celebrated as virtue. But experience teaches you that the most dangerous decisions are the ones made fastest. Discipline is what stops us from deploying systems simply because we can, and what forces us to consider who will be harmed long after launch celebrations fade.

Renunciation, too, feels unexpectedly modern. Not withdrawal from ambition, but freedom from the wrong incentives. After decades in this ecosystem, I have seen how easily metrics replace meaning. Renunciation is the courage to step away from growth that undermines trust, from success that hollows out dignity, and from incentives that quietly steer us away from human wellbeing.

Then there is wisdom—often confused with intelligence. Intelligence optimizes. Wisdom contextualizes. Intelligence asks how to build something efficiently; wisdom asks whether it should exist at all. Many of the most damaging systems I have encountered were not careless—they were brilliant. What they lacked was the patience to understand second- and third-order consequences, and the humility to accept uncertainty.

This work is not passive. It requires effort—the kind that is rarely celebrated. It is easier to ship quickly than to think deeply, easier to follow existing incentive structures than to challenge them. Aligning technology with human values demands sustained, often invisible work, especially when doing so is inconvenient or costly.

Patience matters here more than we like to admit. Urgency has a way of flattening judgment. Many harms arise not from malice, but from impatience—the belief that reflection can wait, that consequences can be fixed later. Patience creates space for dissent, for listening, for wisdom to catch up with skill.

Truthfulness becomes equally critical in a world saturated with narratives. Technology today is wrapped in stories of inevitability and disruption. Truthfulness cuts through that. It demands honesty about what systems actually do, not just what they promise, and clarity about trade-offs rather than slogans. Without truth, trust erodes—and without trust, even the most advanced technology fails society.

Resolve is what holds all of this together over time. Values are easy to articulate when conditions are favorable. They matter most when markets shift, competitors cut corners, and pressure to compromise grows. Staying aligned requires determination—not once, but repeatedly.

At the heart of it all is loving-kindness—not sentimentality, but the active intention that people be treated with dignity. Translated into technology, it means refusing to see humans as data points to be optimized or behaviors to be manipulated. It means remembering that systems ultimately exist to serve life, not consume it.

And finally, equanimity—the capacity to stay balanced in the presence of power. Technology confers influence quickly, and influence distorts perspective. Equanimity anchors leaders in humility, keeps learning possible, and prevents success from hardening into arrogance.

What the Buddha described was a developmental path—how humans grow into responsibility as their power increases. That is precisely the challenge we face now.

Our tools have matured. Our reach has expanded. The open question is whether we will develop the inner capacity needed to guide what we have built. Technology needs to embody these teachings. As we face AI, I believe the systems that embody these principles will serve humanity, and perhaps even guide it. That is the technology we need.

Because without that, progress becomes fragile. Humanity must embody these principles before technology can, and without human maturity, technological power inevitably turns against the very people it was meant to serve.

This is a useful thought for anyone designing, funding, or governing powerful technologies today.

Intent, Awareness, and Consequence

In my own experience, I have seen how often technology is built in pursuit of metrics that have little to do with human flourishing: engagement, growth, short-term monetization, and narratives of disruption. None of these are inherently wrong, but when they become the only drivers, they distort what technology ultimately delivers.

So I have come to ask, for every project, investment, and strategic decision: What is the real human impact we are pursuing? Why does it matter? Who benefits if it succeeds, and who bears the cost if it fails? And more importantly, does this exercise leave the founders better humans? Founders are at the center of this evolution, and they need to embody these values.

These are not easy questions. They do not reduce to bullet points or PowerPoint decks. But they are the questions that separate technology that amplifies human dignity from technology that amplifies harm.

Growth With Meaning

We do not have to choose between progress and caution. We do not have to romanticize technology, nor do we have to demonize it. What we do need is a maturity of thought that matches the maturity of the systems we are building.

If technology is in its adolescence, then what we need now is conscious growth: growth that pairs ambition with humility, ability with purpose, and innovation with responsibility. This is not just a theoretical proposition. It is a practical imperative.

And it begins with the way we talk about technology. Not as something distant and abstract, but as something woven into the fabric of human life and community. This is the voice I have tried to cultivate in Iceland’s entrepreneurial ecosystem through platforms like Startup Iceland, not because it is fashionable, but because innovation that does not deepen human wellbeing is innovation that will not endure.

The Question That Matters Now

After decades of building, investing, and coaching founders, I have learned one thing clearly: the future is not decided by how advanced our technology becomes, but by whether our humanity evolves fast enough to guide it.

The most important question we must ask ourselves is not Can we build this?
It is Should we?
And if the answer is yes, then how do we make sure it genuinely moves humanity ahead — together?

This is not a philosophical luxury. It is a practical necessity. Because technology will continue to evolve. But whether it elevates human life — that is still a choice we must make.
