AI and Human Flourishing
The technological stagnation of the last decade is finally over. AI is here.
Your workflow will become more efficient. Video editing will be done in seconds. Cloud storage and GPU usage will be instantly optimized. Billions of hours of labor and billions of dollars of investment capital will be reallocated towards both more productive and more important tasks. As a result, AI will expand human flourishing.
But rather than be filled with excitement and optimism for a more fruitful future that will benefit billions of people around the world, many of the most prominent technologists, intellectuals, politicians, and investors are actively working to prevent this hopeful vision of the future from taking root and instead trying to force a narrative of caution and fear.
NGOs have already collectively raised billions of dollars under the guise of 'AI safety', more colloquially known as 'AI Doomerism', a mission apparently accomplished by imposing strict rules on what AI systems can do and who is allowed to develop them.
You may wonder why anyone would want to slow down technological innovation and limit productivity. Well, it's not because 'AI could become sentient' or that 'we could unintentionally build a paper clip maximizer'. You didn't actually fall for that Atlantic propaganda, did you? The real reason, unfortunately, is far more sinister and far more typical: control.
There is a larger agenda at play, in which innovation is seemingly permitted only insofar as it advances the interests of incumbent global power brokers. There is a reason that the DoD, In-Q-Tel, and Big Tech are involved in almost every new technology venture. Consider this: Why exactly aren't we commissioning hundreds of new nuclear power plants? (Did you know that the US Navy has 83 nuclear-powered ships?) Why aren't we terraforming our deserts? Why aren't we building high-speed rail lines? We already have the technology for all of this. In fact, we've had it for decades. We even have founders and investors who want to build all of these things. The answer, however, is simple: innovation threatens the central planners' control. They don't want you to have cheap energy, new frontiers, or unfiltered access to compute, because they want to exploit your dependency on their centralized systems in order to control you.
Ruling elites and bureaucratic institutions inherently fear innovation because new technologies, by their very nature, can be leveraged to disrupt the status quo, which they have a vested interest in maintaining. In this context, AI safety is best understood as a multilayered scam in which technology leaders, corporations, and venture capitalists collude with government actors to raise barriers to entry and limit potentially disruptive innovation. (At least, until it can be harnessed to expand their influence.)
Allow me to let you in on a secret: some of the earliest investors in AI companies are the very people now lobbying the government for 'AI safety'. Yes, you read that correctly. What they really want are operating licenses, data exclusivity agreements, and government regulations that will solidify their position as market leaders and make it all but impossible for others to compete.
In case you haven't realized it yet, we are no longer operating within a laissez-faire free-market system, or anything that even remotely resembles one. Rather than allowing private enterprises to compete on the open market, we find ourselves twisted in a web of propaganda spun by billion-dollar marketing campaigns intentionally designed to instill fear and normalize the idea of government intervention. In other words, the state, or more precisely those who control it, will pick the winners.
Apart from the regulatory vector, there is another that is equally concerning: the ideological vector. As the launch of Google's Gemini AI clearly demonstrated, the architects of these systems have every intention of embedding their ideology and philosophical presuppositions into their products. Technology companies are at a point of ideological capture where they pay entire teams of people to tamper with their own proprietary systems with the explicit intention of producing less accurate output. They are actively working to constrain innovation and obscure reality. The publicly stated rationale, presumably, is that 'the truth is dangerous' or some variation of that logic. Which raises the question: dangerous to whom, exactly?
In order to realize a future of human flourishing, we must break the shackles being fastened to our technology. They say that politics is downstream of culture, but the truth is that both politics and culture are downstream of technology. Of course, there will be negative externalities associated with the development of any new technology. Many will surrender agency for convenience, but those of us driven to create will be presented with an incredible opportunity.
There is no reason to think that AI cannot be compatible with human flourishing. In fact, we have every reason to think that it will enhance human flourishing. Consider the billions of hours wasted on near-mindless tasks like video editing, customer service, due diligence, financial accounting, and data analytics. By increasing productivity and allowing for a more efficient reallocation of labor and capital, a larger percentage of resources will be freed up to flow into product design and strategy, where user experience, aesthetics, and corporate worldview will offer opportunities for differentiation.
In the race to differentiate from each other, technology companies are set to become lifestyle brands. As a result, the philosophers capable of imbuing these companies with a vision and a soul will become the most valuable commodity in the technology industry, more valuable even than the GPUs.
Don't miss the forest for the trees: the argument for the benefits downstream of AI is more than just 'GDP nUmBeR gO uP'. It is well established that increases in GDP do not necessarily lead to better outcomes, especially in instances of regulatory capture and other market failures. This is precisely why it is so important to resist government regulation and other external control mechanisms.
Sure, there will be bad systems built by bad people with bad values and bad intentions, but the optimal outcome is that a wide variety of systems are built in a hyper-competitive market, and that we, as sovereign individual consumers, have the freedom to voluntarily choose which ones we interact with and at what level of immersion. The real opportunity with AI is to leverage large language models to create competing realities that the public has the freedom to join. If the nation state is going down, this is the best path forward.
Can AI lower the barriers to entry for sovereignty? Can AI, in combination with drones and weapons systems, guarantee the defense of charter cities? Can we have a world with thousands of different AIs, each imbued with the particular values of its architects?
The people who design the best AI systems will understand the interconnectedness of the natural world. They will be philosophers at heart. They will understand that there is no free lunch: that there are positive and negative externalities to everything. They will understand that there are no solutions, only tradeoffs. They will have faith.
Forget optimizing for GDP, key performance indicators, or the utilitarian 'solution' du jour. Those who prioritize reductive data points will create digital hellscapes that no user actually wants to participate in. They will build the slave colonies, but it will be your responsibility not to subscribe.
The winners of the AI race will be those who create systems that don't fight the natural order, but instead, mimic it. They will not try to reengineer nature or humanity, but hold them in reverence, and learn from them. They will be built with the understanding (or at least, allowed to learn) that we, as humans, occupy a specific niche within nature, and consequently, are an intrinsic part of the natural systems we exist within.
Perhaps most importantly, the best AI will sell the dream not of a return, but of a renaissance. There is no going back.