Life in the Middle and After AGI

  • Writer: Ethan Smith
  • Apr 16
  • 34 min read

Updated: May 18

We are racing toward an uncertain future that draws nearer every day. We have a rough idea of what that future could look like, and I'm inclined to think some of our guesses and fictional depictions hold real weight. Historically, we've been good at this kind of forecasting. Possibly now more than ever, we understand ourselves as humans well enough to know what we will seek and the trajectory we are on. Meanwhile, in our sciences, theory and understanding have outpaced implementation, extending our vision of what's possible before we get there.


Despite being able to reason about where things may go, I feel like I haven't really taken to heart yet how much life will change. I don't think I'm alone in that.


We can speak logically about living among robots or using VR headsets daily, but I doubt we'll know how that feels until we get there. Inevitably, since our emotions influence our thoughts and decisions, that feeling will shape how we consider the future and the urgency with which we treat it.


When I say urgency, I'm not necessarily speaking of countermeasures for a dangerous technology. Rather, I think many parts of society—schooling systems, governments, people, relationships, and more—are not prepared to adapt to and cope with the changes. I believe we have prioritized short-term band-aid solutions when, in reality, our developments necessitate a radical rewrite of the foundations of everything we know.


In the near term, AGI (Artificial General Intelligence) is one of the prime candidates to cause a substantial societal metamorphosis. AI in its current state has already begun to take on its first jobs, become romantic partners, act as therapists, and much more.


In this post, I want to talk about what lies ahead for society in assimilating into this new meta, as well as what it means for us as individuals, both during and after the development of AGI and superintelligence.


Overall, I foresee a rude awakening and an awkward transition period, particularly for societies long founded upon human merit to designate class systems. Depending on how we weather this period, it may be followed by either a utopic or dystopic future.



AI and Human Merit

"AI will come for our jobs." Correct. It most certainly will. Automation is an unavoidable fact of human development. But cards played right— removing the burden of work from humans could be something to look forward to.


First go the little things: menial jobs that are easy to automate, like call centers, scribes, and customer support services. Even if these aren't eradicated completely, AI can at least handle a significant portion of the workload. While the loss of these jobs prompts us to find new ways to ensure those who previously held them can still make a living, I would argue this is a refreshing place to see tedious logistics automated away, both for the consumer and the performer. Sectors with lightweight tasks that live in the digital realm are currently most vulnerable, since AI does not yet have a strong presence in the material world.


It's possible that areas such as data science and accounting may also face risk, but generally, humans still play a significant role in overall analysis, intuition and judgment, collaboration, and managing client relationships. Presently, these are at least places where AI collaboration makes a lot of sense, though for fluid collaboration, many might still prefer to be talking to a human.

Driving is at risk. Widespread self-driving cars, I reckon, are just years away considering the progress made by Waymo and other companies. Currently, putting your life in the hands of a robot car may be difficult to get accustomed to, but in time, it may feel like the safer choice. Especially for riders who value privacy, it could become the preferred option.


Later on, more elaborate digital jobs will be at risk, like software engineering. Already, the story of product managers vibe coding solutions is coming true, though you still very much need a good bit of engineering to ensure that code deployed to millions of users is scalable and secure.


Artistic crafts are a strange one. Art is still far from AI-complete, and the metric for when it may be considered complete is subjective. We're at a point where it is often difficult or nearly impossible to tell AI creations from human ones. It's even stranger to think that this was one of the first places for AI to become quite strong and widely deployed. Our prior notions of difficulty, value, and creativity might lead us to expect that art would emerge at a later stage in AI development. Perhaps, though, it was simply a natural fit: images are a staple of the data we have created, and shuffling pixels around works nicely in the digital world. Many will assert that there is a level of complexity AI has yet to reach, and that ultimately it can't deliver on soul. I believe this when seeing the majority of slop, but question it occasionally when a generated image has a truly surprising composition and cleverness to it. The ability to tell a story through an image is a challenge, and not one limited to AI systems. Things will get weird once the final challenges for AI start to mirror those of humans in rarity and difficulty.


Once robotics improves, I imagine physical labor will be a prime target. Robots, despite incredible progress, have yet to match the full flexibility and generalizability of human workers, though there's enough of a glimmer to recognize that it's not a pipe dream. Factory work, construction, sewage management, janitorial work, and more are all at risk. While robots may struggle in out-of-distribution scenarios and other edge cases, these tasks are desirable targets for automation because robots offer cost-effectiveness, precision, strength, and durability.


The last to go will likely be jobs centered around the service industry. As is the subject of this post, I think it could take some time for people to tolerate robot waiters, nurses, doctors, salesmen, et cetera. For some, this could be a welcome change, or at least a matter of indifference; personally, I feel a bit uneasy asking people to do things for me. For others, though, especially in medical professions, there is an element of caretaking by a person in the flesh that will be challenging to match.


However, I don't think anything is entirely off the table for AI replacement. In due time, it will make sense to automate just about everything that creates commercial value. Chances are, it will be cheaper, faster, and more masterful than anything humans could come up with. Regardless of whether an economy is under a capitalist design or another, gains in AI raise the question: "Why would we settle for less?" Unless jobs still play a role in giving people purpose or in distributing wealth, it just won't make sense to have human workers. The competitiveness of the job market and the need for talent suggest that even top performers will be outcompeted. Given enough time, AI will exceed the greatest engineers, CEOs, and scientists. There is a possible exception here that will be discussed later, though for the most part this is a very reasonable future to imagine. Technology doesn't move backwards, and plateaus are temporary; the only way through is continued improvement.


To me, this is a pretty world-shattering thought, and it's coming quickly. Human life has been built around work for tens of thousands of years. Work has given purpose, enforced order, defined social stratification, and served as a means of socialization within a community. Its loss confronts a very fundamental aspect of human nature: the drive to create and contribute alongside others.


The near term, in the middle of AGI

The near term is poised to deliver the greatest culture shock we may experience in our lifetimes. It's already happening, and I think our response is frequently an unproductive fight against the grain.


Tech job interviews are still doing LeetCode and doubling down on surveillance of whether AI tools are used, when really these tools are becoming part of the job, and I strongly believe that those who use them well will surpass those who don't.


Schools are facing a crisis in academic integrity. Many students are using GPT to complete assignments and essays, and schools are responding with third-party text checkers that try to determine whether work was written by AI. Given how subtle AI-written text can be, this often leads to false accusations and offers only a tenuous band-aid for a growing problem.


On one hand, this is all understandable; performant AI tools have only hit the masses in the past year or so. The logical gut reaction is to build defenses around your current practices. After all, these practices have been effective for a long time and have become deeply embedded in the systems themselves. But this only buys time before we must address how the role of the human will evolve. I'd argue we've been slow to answer this question since the inception of the internet. The time bought should be treated as a stopgap in which to plan for the future, quickly.


All of this poses a threat to our merit system, where grades and acceptance rates typically serve as measurements of knowledge, or in the worst case, of memorization of scattered fundamentals. This model has long been flawed. While such metrics can showcase work ethic in a scholarly setting, they are only a proxy for the ability we want to capture. We can only hope that it transfers to real-world application.

While formulas may be good to know, they are more accessible than ever via the internet. Many tests of fetched knowledge have been made trivial. The more important differentiating factor is knowing where and when to apply your learnings, ideally in novel, unseen scenarios, an invaluable intuition that has led to much of our innovation today. As the calculator changed the way we do math, letting us focus on abstracted forms of our problems without worrying about the formula for sine, AI, too, will allow us to abstract further and focus on the "why." We should be looking into how to adapt ourselves to teaching and employing humans alongside AI rather than attempting to make it vanish from existence.


Schooling primarily employs a bottom-up approach, rapidly sifting through fundamentals and potentially presenting some pseudo-real-world scenarios for application. A failure mode here is learning to be a good test-taker: studying only what you need to pass, and forgetting it after. It is, admittedly, a challenging thing to move away from, both because of the energy required to make the change in a system that has no time for breaks, and because it asks us to somehow evaluate more holistically and abstractly while still aiming for a quantitative, objective, and fair metric.


Fairness is essential, and the quantitative or objective aspects of performance measurement are considered prerequisites to fairness. Even so, are grades or performance reports even fair now? This was something I quickly realized in my college years: your final grade is sometimes a greater reflection of your professor's demeanor than of your understanding of the material, and student rumors about easy and hard teachers are attempts to hack around this. Setting aside how the material is taught, this phenomenon shows up both in individual test design and in the interpretation of more subjective answers like essays. I would argue that every metric humans have ever designed that cannot be firmly grounded in physical reality, the way mass or velocity can, is wrong. We have no way to even know if our metric is accurate beyond its perceived utility, which, unfortunately, can also be subjective. An additional barrier to changing our evaluation systems is that counterfactual utilities are hard to establish; that is, if we had used a different approach, what would our world look like?


If we can accept that our current quantitative metrics are unsatisfactory, and that this may be an intractable ideal regardless, we can get more creative in how we evaluate. After all, colleges and job interviews increasingly look for experience and the things you did rather than numbers. With that said, I believe heavily in a top-down approach to education. Here is something we'd like to solve; perhaps you know nothing about it. What can you do to get there? Ask AI for background on the problem or for sources that may acquaint you with the material. Build intuition for what the problem asks of you, and get to tinkering. This was an approach I only discovered in my late college years when I wanted to do research in AI, and it was possibly the first time I fell in love with learning. More than ever, the resources are accessible and democratized enough to make such an endeavor possible. It is not unthinkable to imagine a very free approach to designing personalized school curriculums, driven by the student's curiosities and progress while still providing a diverse liberal arts education.


I think this is a brief sweet spot of time to cherish, where we can collaborate with AI tooling and still bring something to the table: letting it take care of the tedious parts, supplying the higher-level judgment and direction where it falls short, and using that to take on problems that would normally be outside our grasp. As opposed to just matching the calculator's prowess, we can go so far beyond. When we have access to abundant intelligence, our ability to act independently, make meaningful choices, and drive impact becomes the key differentiator. The real challenge isn't just abundance of intelligence; it's channeling that intelligence into action. As detailed here, I think freeing up cognitive bandwidth from menial tasks allows small teams to do powerful work, and perhaps instead of needing a company-sized superorganism of people to make things happen at scale, many smaller teams exploring different directions while circumventing bureaucratic friction could be feasible.


Just as much as this is a call to reform the work and education systems of the many, it also calls into question why those at the top are there anyway, which I think is another obstacle to change. I see this as a failure mode of capitalism. Capitalism, through its invisible hand, intends to drive innovation through competition and provide benefits to all its constituents. It deviates from this goal once incumbents inherit the power to suppress progressive technologies, such as when oil companies engage in anti-nuclear lobbying. It's a poor minimum, and there's weak incentive to move from a system that benefits you.



The Long Term, after AGI and ASI

In a world where AI solves everything, what is left for us to do? Not that this signals an end of progress for humanity, but our involvement may become increasingly remote. Once the last job is taken by AI, it would seem the only fair governing structure would be something I would call technological communism: a merit-based structure becomes impossible to deploy because the superiority of AI effectively renders every human equally inferior in capability.


Are we capable of being content without a mission in our hands? How deeply ingrained is the human tendency to seek power and climb social hierarchies? This is something I ask frequently even now, as our primary objective, survival, has been abstracted into a 9-5 salary that purchases living rather than the first-hand grit of fighting to stay alive, which, too, may have interesting psychological side effects.


Maybe then, life becomes something of heaven. There is no scarcity, and every human can have any experience they want, when they want. Why does something still feel missing here? What could possibly be missing from everything?


For starters, it would seem we almost need some bad to appreciate the good. We are constantly regressing toward homeostasis, a steady state, but not equilibrium. To equilibrate entirely is to become inert. To be unable to deviate up or down is akin to the heat death of the universe. Imbalances are what allow things to happen. Similarly, with constant surpluses, it is not unreasonable to imagine becoming increasingly jaded or desensitized. Having everything becomes the new norm. This is possibly part of the reason why the rich and famous sometimes seek out unusual or unethical pleasures.

Though, it would seem the "bad" is something that needs to come about incidentally as opposed to being manufactured. It is something we push through and come out the other side of, feeling rewarded for our efforts. The delicate equilibrium between positive and negative aspects breathes life into life. So far, life has changed, but not as radically as it could down the line. Despite improvements in access to health, wellbeing, socializing, and more, present life still has a good bit of ups and downs, though it's clear the bad is losing the fight. We aim to eradicate bad and maximize good by solving humanity's problems through chemical governance, like antidepressants, and providing non-stop entertainment. Who can blame us? This is a fight we were designed for. We're just getting ahead of the bad. That said, how can we replicate the ebb and flow of life without it occurring naturally? This is mostly a rhetorical question; personally, I have no idea. Though I keep thinking about the ending scene of Pantheon, or, a bit more provocatively, about what lies in human nature around the enjoyment of pain. Perhaps it is a self-balancing mechanism, a way to achieve desired heightened feelings, even negative ones, or the fulfillment of some purpose that makes it worthwhile.


It's hard to imagine what life is like in a non-stop infinite paradise, but without a need to deliver on merit, I hope that we can create just for the sake of creating and upskilling ourselves rather than needing to compare to something else. I hope that we can paint, play sports, travel, consume, and do all of that freely. Though as someone who has always been very goal-oriented, I imagine I'd find it difficult to not have problems to solve.




Dealing with the AI among us


The near term, in the middle of AGI

For a while, AI-generated content may have fooled only older generations, but increasingly it is becoming indistinguishable from reality for any viewer. There was once a time when everything we saw was human-made; now it is always in question. AI-generated text isn't so bad, given how trivially anyone can produce deceptive text themselves. However, AI images and videos pose a larger risk of societal psychosis, both for legal and psychological reasons, as well as by skewing our perception of value.


Societal Psychosis

For now, it's primarily Facebook shitposts and boomer-bait, which, to be fair, join the many videos online already faked for views. It starts to get more unnerving when the ability to fabricate fake news is faster, higher quality, and more accessible than ever. While human catfish exist here and there, it becomes plausible for AI catfish to be deployed effortlessly en masse to saturate an entire platform, to the point where we may not be able to tell if a person is real, because everything about their mannerisms, pictures, and persona blends in to appear like any other human.


This is my common contention with the "But people can use Photoshop!" argument. Yeah, sure. But now everyone has the capability to generate fake content, blazingly fast, and arguably more convincingly than with prior tools. The intersection between bad actors and those capable of manufacturing deceptive images has grown enormously.


It is a bit unsettling to see how many commenters on a post, or even I myself, fail to recognize an AI image or mistake a real image for an AI one.


The jokes about being prosecuted in court for a crime one never committed, based on AI-generated evidence, no longer feel unthinkable. The stock market made a sizable move due to a fabricated image of the Pentagon on fire. Heavenbanning is a hypothetical phenomenon where, instead of directly banning someone from a social media site, all of their interactions with other real people are replaced with overly agreeable AI personas, keeping them as a user of the service and in a data-generating prison.


Really, these times spark a war on truth, something we take for granted. Many industries and facets of life have depended on images and video as canonical sources of reality. It's clear now this was a short-lived privilege dating only to the inception of these mediums. Attempts to watermark are a lost cause, given that models can be deployed locally and watermarks are trivially tampered with. A logical way forward is for judicial systems to be cautious about where they source evidence from and whether any metadata can indicate tampering. While security cameras are probably fine, personal surveillance systems like dash cameras may come into question. Even so, relying on security cameras moves trust toward authority systems and away from common citizens.

Aside from what this means for law, we move toward a world where we almost get to pick and choose what we believe. Things that do not fit our agenda can be dismissed as fake, and those that do can be considered real. Reality, suddenly, is optional and open to a wide space of interpretations, hence the use of the word "psychosis." More subtly, while our concept of reality is already tainted by caricatures and exaggerations on social media, this only offers more fuel for the fire. Generated experiences in VR are also coming closer to rivaling reality, to the point where they can look less like a temporary escape and more like a complete chosen universe to live in, as in Ready Player One.


The other related point in this section is how we will come to see AI living among us. Presently this is not front of mind, given AI's lack of a material form, perceived autonomy, and human convincingness.


Though already, people are using AI for social needs we would normally pursue other people for. CharacterAI is a controversial company serving chatbots that act as companions, lovers, and therapists, to name a few. I'm not certain what their original goal was, though judging from their subreddit and the reactions to brief site outages, it has resulted in a large population of minors with seemingly unhealthy relationships with chatbots (in at least one case even leading to suicide), and the company has done little to avert this because, well, it's their whole revenue.


I have previously gotten flak for voicing my concern here, opponents citing that it is the parents' fault for leaving their children unattended. That's a valid factor, sure. But the website is very much targeted toward a young audience, an audience that wants to talk about difficult subjects AI presently may not be fit to advise on. You, as a parent, probably aren't considering that your child is talking to SpongeBob online about self-harm, and parents want to give their children a proper amount of space. A comparison is often drawn to video games: it's not the fault of the company if someone develops an unhealthy addiction. But this time around, there is something distinctly parasocial about a service primarily used for emotional connection with digital beings whose behavior is controlled by the company and to whom people readily divulge sensitive matters. It doesn't feel like a huge ask for providers to either moderate such content or signal for help if danger is suspected. Otherwise we are left with a vacuum of responsibility between the two entities that are aware of the conversations, and neither child nor machine (currently) can really be held responsible. Therapists are required to report suicidal ideation when they suspect it; some kind of similar accountability is needed. I recognize it's tricky, as users are entitled to privacy, but the bots are also not licensed expert therapists capable of de-escalating situations like this. It either needs to be controlled in some manner or not deployed at all.

And once again, it's funny how AI has some of its most significant use cases in some of the most human-feeling things, like art and companionship. If something that feels like authentic companionship can be provided by an inanimate stream of universally agreeable text that only touches the surface of human nature, then perhaps that shows how deprived of human connection many currently are and the loneliness they face.


This is part of that awkward transition period. I don't think AI-human relationships are inherently bad, but they're presently parasocial with the deployers of the chatbots and seem to have an overall bad effect.


The fact is, though, this trend will only continue, and AI will increasingly take on more human characteristics until it blends in entirely. We will be faced with figuring out how we conceptualize AI living among us.



Perception of Value

Aside from how AI content may distort our understanding of truth, our perception of value is also at risk. More specifically, the way we assign worth and meaning to experiences, art, writing, music, and more, as well as their ability to move us, is under threat. Work that was once extraordinary might become extremely commonplace and kitsch. Instead of continued fascination, we may find ourselves increasingly jaded by a constant flow of content, dopamine receptors burnt out.


There are a few common heuristics we use for assessing value and triggering a sense of awe.

  • Rareness

  • Blood, sweat, and tears

  • Perceived skill

  • What other people think of it 


Rareness is how a diamond gets its value. If diamonds grew on trees, they likely wouldn't have the acclaim and use in luxury goods that they do now. When we come across an outlier among a sea of content, it is properly surprising and more likely to move us in some capacity, whether toward joy or sadness. Finding a standout piece of music inspires you to seek out more of the artist's work, with perhaps just enough discography to go through while staying excited. But now a fine-tuned AI model could mass-produce the traits that define one's artistry. The tactical cliffhanger fails, and the magic behind the curtain is lost as the work succumbs to sitting among millions just like it. I won't say this is guaranteed; there is still often an appeal to originals and vintage, similar to how people appreciate records, physical paintings, or NFTs as opposed to screenshots. That said, I can see how we could overdose on novelty too quickly, killing what makes the work special in the first place. Anecdotally, this generation seems profoundly nonchalant and serious about little, which I think is the product of a constant stream of content encompassing the breadth of human existence at their fingertips. Naturally, things become dull when they're everywhere.


Another facet of value is the effort that goes into work. There is a common conviction that for something to have value, the person who built it must have put significant time, heart, and work into it. This is a heuristic rooted in empathy: through experiencing the finished creation, you imagine the creator's experience and struggle in making it. I think this is a fine heuristic, and it is an interesting way by which we share experiences. However, I've observed a strange dissonance when people appreciate a piece of art, feeling the soul behind it, and then discover it was made by AI and immediately disown their appreciation. Can you take a mulligan on a feeling? Can you undo an emotion? Should the circumstances by which something moved you matter? Personally, I have often lived by "separate the art from the artist," a saying usually invoked when someone appreciates art made by a morally reprehensible person. I think the same can apply here. It seems like an unsustainable cope to reject the feelings that arise from AI creations in a time when they will only become more pervasive and convincing.


Similar to effort, we also reward outstanding feats of skill. We assign value when a given work appears like only a handful of people in the world could have pulled it off. This strongly correlates with rareness and the effort that goes into building such a skill. 


Lastly, we may also conclude value based on what other people think. Try as we might to be our unique selves, we all have at least a smidge of "follower" in us. We may learn to appreciate something if someone very important in our life also appreciates it. From an economic perspective, an artist's work is valued at what the top bidder is willing to pay. Why is this the case? If it is discovered that a wealthy patron is willing to pay up to 500k for a piece of art, this suddenly turns heads. Scalpers would love to get it at any price below that to re-sell to this patron, so they will fight for it. This may also increase the value of other works by the artist, in hopes of finding a similar payout. Before we know it, an artist who was originally a nobody has risen to fame and is selling their work for millions. It is a self-reinforcing loop, and it incentivizes artists to advertise that their art has sold for a high price.


Aside from that which humans create, the other place our sense of value may be affected is our perception of the qualities in each other. 


With technology that lets us edit both our genotype and phenotype—our genes at birth and our human form, respectively—every person can suddenly be whatever they want. If we'd like to be kinder, taller, free from disability, more confident, or more outgoing, there will likely be a technology that lets us reprogram ourselves accordingly. As it stands, antidepressants are noted to induce personality changes, and it is hard to say whether these are the product of lifting a fog overcast on one's true personality or an actual modification of it. Plastic surgery can already give us an appearance beyond what we were born with.


All of this partakes in eugenics. Some of it feels exclusively good, like eradicating a disease that would cause an untimely death or lifelong suffering. However, it becomes more questionable when used to curate certain personality traits or physical abilities. It's a strange dilemma because I don't think it will be obvious where to draw the line. Depression may exist as a chemical imbalance, or it could be born from dissatisfaction with the self. Thus we may be incentivized to curate personality traits with a low risk of depression and suffering, still arguing that we are motivated by reducing disease. But there's a slippery slope here; suddenly everything deviating from a model of perfect health can be seen as disease. Who gets to call the shots on the image we build humans into? If this reminds you of a certain guy from the 1940s who imagined an Aryan race, you're spot on.


Aside from the ethical concerns, it's really strange, and maybe a bit sad, to imagine a world where everyone is the same. One beauty of humanity is that role models of success are massively multimodal; there are many possible "best" yous that you can aspire to be. I would worry that eugenic interventions would increasingly collapse diversity in favor of a single ideal. Given how commonly the same kinds of plastic surgeries are performed, I would say there are already only one or a few conventions we would aim for if given the chance. At some point in their lives, many people have wished to be more like those within an in-group. It is really strange to imagine what a world would be like without the natural variations that make individuals unique. Whereas once, living a life that goes against your natural inclinations might prompt a period of self-discovery, now you could suddenly convert yourself into anything for anyone. What kind of life is it where every aspect of yourself is a manual choice? What are the implications of everyone converging toward sameness?


It could be a good thing in some ways: it's one route to possibly reducing conflict within a group, and ideally most of the interventions do in fact reduce suffering. It's strange, too, as many people already feel inadequate because of how they deviate from the norm instead of appreciating themselves for it. On one hand, we can continue to push the idea that everyone should feel good about their differences. On the other, another solution is to avoid anyone ever feeling left out or inadequate by having everyone be the same. There is something instinctively repulsive about the latter, but I don't want to write it off as an inherently bad thing.


A counterargument to the criticisms against eugenics is that it’s all already happening. I previously mentioned medication and body modifications, but we have also been selecting for favorable genes since the start of humanity, without any fancy technology. Namely, all the dynamics of courtship are centered around advertising positive genes and selecting for them accordingly. Many of the attractive physical and personality traits suggest an effective mate either for one’s personal survival or their progeny. By this, we are all somewhat casual eugenicists. 


Another counterargument is that the force of natural selection is much weaker than it used to be. If a certain blueprint for a human did not work, that individual might have died before reproducing, thus ending the bloodline carrying that mutation. However, thanks to modern medicine, many people now have the opportunity to live despite their disabilities, though often with incomplete or expensive treatments that have side effects. Nevertheless, ailments that would have once stopped them in their tracks now persist in populations. The argument would be that the right dosage of gene manipulation could serve to filter out harmful mutations, using artificial selection to supplement the now weakened natural selection.


The long term, after AGI and ASI

For both the generated content and the personas that live among us, our sense of reality is at stake. Though at some point, it has to become normal, right? If we can no longer discriminate, we are thrust into maximal uncertainty, unable to conclude anything about our surroundings.


So I see three ways to cope:

  • Treat nothing as real,

  • Treat everything as real, or

  • Choose the things you want to be your reality.


The first may be a point of panic for many and a point of calmness for a few.


Treating everything as real can also be distressing, especially if many sources are providing conflicting information.


The third feels the most plausible despite it seeming like an unhealthy coping mechanism, and I'd argue it's already how we deal with a lot of things. Reality goes on, and we'll only witness the slice inducing the most tangible stimuli, generally that which is in front of us or deemed relevant. That which isn't part of your world can be written off as alien, far away, or unbelievable. For that which matters to you, it doesn't matter if it's AI-generated—it's real to you. We don't know if we're in a simulation right now, though we generally choose to live as though the world around us matters.


In a time where you can increasingly immerse yourself in the bubble that is your community, it makes for a convenient way to decide on what your truth is, and possibly, it may be the natural outcome just given the sheer amount of information there is to take in.

For once, we may get to decide our own complete reality to participate in, possibly even one completely departed from real life, and doing so might be entirely healthy. As it stands, I think we all participate in versions of reality that are not entirely aligned with reality itself, but close enough that we can understand each other and agree on our observations. Reality is just the consensus of the majority: the common denominator or converging point shared across a wide distribution of highly unique interpretations.


California is a concept that exists exclusively in the human mind. The "Strict-Reality" is that Earth is a set of contiguous masses of land. States, countries, and other defined locations are all abstract constructs. These constructs hold weight because we agree upon them, and thus we have built our realities and experiences around them. They are simulacra whose existence is conditional on the intelligent life that recognizes them: if the life that observes them ceases, so do the simulacra. Laws, money, loans, governments, and many other human innovations all exist as symbols we agree upon. Each culture may acknowledge their own set of constructs, though they are often analogues to one another, and we can recognize cultures outside of our own. This applies to constructed fictional worlds, like that of multiplayer video games or the communities that arise in social media. They too may have their own hierarchies, norms, online personas, and rules that people take seriously enough to constitute their own reality, deviant from the real world. Presently, these realities are not "complete." We do not yet have the capability to exist entirely in these alternate realities; we're generally still required to participate even a little in the real world.


It becomes overt when our views clash and we disagree over how reality ought to be. We bicker over religion, politics, opinions, and more. A deviation in models of reality may be as simple as one person seeing a certain hobby as favorable and another not. Though in expectation, we can say a hobby is generally enjoyed by some percentage of people. This averaging of many disparate understandings of reality gives us, perhaps, our best estimator of reality.
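
As a toy illustration of that last point (my own sketch, not from the post): if each person's reading of some fact is noisy but not systematically biased, the average across many people recovers the underlying signal. The numbers below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical ground truth, e.g. the fraction of people who enjoy a hobby
truth = 0.37

# each person holds a noisy, idiosyncratic reading of that truth
interpretations = truth + rng.normal(loc=0.0, scale=0.2, size=10_000)

print(np.mean(interpretations))  # ~0.37: averaging disparate views recovers the signal
```

The caveat, of course, is that this only works when errors are independent and unbiased; a shared distortion, like a platform-wide illusion, survives the averaging intact.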


That said, with time, I think our interpretations will further fragment from each other as technology allows us to dive deeper into our communities and virtual worlds, each with their own rich, encapsulated lives.


To better visualize this, imagine each point in a scatter representing a single human's notion of reality, with the averaged point in the center representing what we gravitate around. Outlier points could suggest that one's reality deviates significantly enough to be considered detached or ill. Increasingly, we may move toward a set of separate pockets, each with its own rules and understandings. As opposed to everyone aiming for mental models of the world converging on a common reality, there may be multiple abstracted worlds, like MMOs, online servers, et cetera, to choose from. Points that would have been considered delusional in the original context now have catered worlds to support their existence. When reality splinters, and the intelligences that construct all of the facets that define it are off participating in their own clusters, does there even remain a single unified reality to adhere to?
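
Here is a minimal sketch of the two pictures described above (my reconstruction; the original post illustrated this with figures): one population scattered around a shared consensus versus the same population split into self-contained pockets.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared reality: everyone's notion scatters around a single consensus point.
shared = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fragmented realities: the same population split across separate pockets,
# each cluster gravitating around its own local "truth".
centers = rng.uniform(-10, 10, size=(6, 2))
fragmented = np.concatenate([c + rng.normal(scale=1.0, size=(80, 2)) for c in centers])

print(shared.mean(axis=0))      # near the consensus everyone scatters around
print(fragmented.mean(axis=0))  # a global "average reality" no pocket actually inhabits
```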



For our relationship with AI, in the long run, I think there are a few ways it could play out, assuming we successfully avoid the catastrophic versions of the future.


In one case, after periods of bashing heads, future generations will be so accustomed to advanced AI as part of life that it may be seen as cohabiting with another species, provided that it takes on a relatable form and does not disturb the power dynamic. It might be that every other friend you have is synthetic, and that's just how it is. It isn't even a question when someone is in a relationship with an AI. It all seems absurd now, but this manifestation, or possibly something equally strange, will be someone's ordinary. All just is. As we'll discuss later, this would be the case where AI and humans improve at a similar rate and maintain unified goals, possibly even resulting in a hybrid species.


In another case, AI surpasses humanity immensely and takes on a god-like role as something we look to for answers. In a way, we're already using AI to search for answers, though on the other side, it is still learning much from us. In the long run, we can imagine AI becoming so powerful that it has no real need for humans, though we somehow rein it in to be benevolent and thoughtful as it carries us through continued progress.



The power struggle between humans and AI

Humans and AI don't presently have much of a power dynamic, because AI currently lacks autonomy. I will say nothing about the presence of consciousness here, but nothing suggests any kind of sustained sentience, consistent narrative, or uncontrollable behavior. Its behavior is at the mercy of a pseudo-random seed we choose. Right now, I see it primarily as a tool.

Maybe there is a bit of tension between humans and the abstract notion of technological progress and how it may affect lifestyle. But I think we can agree a power dynamic isn't really present between individual AIs and humans, given AI presently remains a text-generating assistant.


In the future, as AI is given more inputs, outputs, and a motor system for operating in the real world, a power dynamic could arise. It isn't unthinkable to imagine future AI being powerfully persuasive, deceptive, or power-seeking. Building something that has historically shown unexpected emergent behavior and is intended to surpass us across all axes, while at the same time expecting it to comply with human orders, feels inherently challenging. Despite lions and gorillas besting us in brawn, we still hold the dominant position by a long shot through intelligence. AI, meanwhile, could find itself miles ahead in both brawn and brain.

AI has the potential to be the first time in history humans discover something above them in the food chain, created by their own hands.


I'm tempted to say that the alignment problem may be fundamentally unsolvable through traditional means. There's an awful lot of faith that goes into building something in the image of ourselves, creatures that act determinedly on their goals, making it far more powerful than humans, and expecting we will be able to contain it. It might be possible to build a perfectly benevolent superpower, but this seems very hard after observing how intelligent creatures interact with each other. I don't know of any time in history when something that gained substantial power remained entirely free of corruption and held the people's interests as its first priority. Granted, a constructed superintelligence, while it may mirror many of our characteristics, may also be quite different from us and not necessarily adhere to the framework we use to assess competitive species on Earth. I will also concede that humans have been conditioned by evolution to put themselves first; this may or may not apply to AI. That said, if this framework is universal to agentic beings, then this remains a risky and very difficult problem.


So what's the alternative? Do we just accept our fate and continue blindly building our own demise? Not quite. I think the best way to ensure our survival is to make sure humans improve at the same pace as AI. Recall that the human form isn't necessarily static. We wear gadgets, use phones as utilities and external databases containing substantial portions of our lives, utilize prosthetics, and have more recently developed brain-computer interfaces like Neuralink. This will, inevitably, continue. Instead of AI growing independently, it is quite likely we will increasingly integrate with it. The mitochondrion was once an independent lifeform. At some point in history, it fused with a cell, and ever since, the two have operated as one; this deep symbiotic relationship paved the way for future evolution. I would call this solution alignment by unification.

Firstly, I want to discuss a brief ontology of different kinds of symbiotic relationships I have observed. Alignment is enforced more strongly moving down the list, though really this is a smooth continuous spectrum.

  • Alignment by specialization and trade

  • Alignment by equity

  • Alignment by unification


Alignment by specialization and trade

This kind of alignment describes a scenario where actors complement each other, have mutual dependence, and find a fruitful division of responsibility, yet each actor holds their own distinctly separate goals that the other party pays little mind to. This is the classic "I scratch your back, you scratch mine" relationship. Helping your counterpart doesn't directly benefit you, and may even ask you to expend some of your own resources, but it prompts them to provide help in return as part of the agreement. Many human interactions, like trade, fall into this category. The relationship can be cold, like trade between two countries driven by availability of goods while still keeping potential rivals at arm's length. It can also be friendlier, like large companies that provide cloud services offering special benefits to startups to help them grow: a successful startup ideally means a continued loyal customer with larger spend. If I can help you do well, I'll do well. As this relationship becomes friendlier and mutual interests grow, it moves toward alignment by equity.


Alignment by equity

Alignment by equity consists of some of the previously mentioned interactions, but there is greater overlap between goals, and the success of each party is more intertwined with the other's. Helping your counterpart directly confers a benefit to you as well. I think most symbiotic relationships fall here. Cats rid homes of pests, which satisfies their own hunger but also benefits humans. Early relationships between humans and wolves may have involved team hunting efforts and shared resources. Bees specialize in different jobs to ensure the success of their hive, which translates to their housing and protection. Suddenly, the parties have some stake in each other's outcomes, and there is an incentive to improve each other so that all can contribute to the shared goal. I like to analogize this kind of alignment to startups accumulating employees, investors, and advisors by sharing equity. While it requires giving up a share of the pot, having the masses vested in your success is a powerful force and offers diverse support for a mission. The vision becomes less that of any one individual in the company and more a product of the whole. At the more extreme end of the spectrum, we begin to see some loss of individualism, deeply entrenched connections, and the construction of entities or superorganisms that represent the shared efforts of the involved parties. However, this is still distinct from alignment by unification in that constituents, particularly intelligent beings, still have personal goals that differ from each other's or from those of the whole.


Alignment by unification

A unifying alignment describes an absolute merging. There is no space for goals to diverge because the separation between parties entirely dissolves, revealing a single unit; it takes at least two to disagree. In alignment by equity, the goals of each agent are like a Venn diagram: there is a significant shared component that is the binding force of the relationship, but each agent ultimately pursues their own interests. Here, the diagram approaches a single circle. These examples are easy to overlook because unified relationships look like one organism. I mentioned the mitochondrion becoming one of the cell's organelles. Another example is the cells that comprise your body. Cells can function as their own living units, but in the body, they elegantly distribute responsibilities and forego components of their own autonomy to contribute to the whole. If the body lives on, the cells stay well-fed, oxygenated, and protected from harm. The whole is dependent on the parts as the parts are dependent on the whole. Admittedly, there are two caveats that make unified alignment ill-defined. For one, the example of cells comprising a whole is not so different from how ants form a colony or bees form a hive; it's just that individual bees and ants appear to have greater autonomy than cells. Secondly, while there are quite a few examples of unification at the microscopic level, I do not know of any occurring at the macroscopic level between intelligent beings.


For our AI development, anything less than alignment through unification is, in my opinion, highly risky. A multiple-agent model of symbiosis is fragile. As long as the relationship consists of distinctly separate agents, outcomes can affect them separately as well, and thus goals can grow in opposing directions.


I may have goals that are, at best, irrelevant to others, if not conflicting. At the very least, we prioritize differently; namely, we tend to put ourselves first. This tendency reveals itself in dilemmas where one must choose between serving oneself and serving the goals of others. Imagine a life-or-death scenario where only you or a friend could be saved. You might even choose the altruistic option, prioritizing your friend's life over your own. Nevertheless, you are forced to make a decision where each party receives a different outcome. Unequal outcomes, especially when one is positive and the other negative, can cause a loss of faith in the joint mission, put distance between those involved, and further cement a divergence in goals. Even slight misalignments in outcomes and goals can magnify with time. Any kind of divergence in goals, weakly integrated parties, or other form of separation leaves the door open to one party backing out or changing the terms of the social contract, typically with the more powerful party calling the shots.


Dilemmas like these put stress on a symbiotic relationship. A change in circumstances, goals, power, or other unpredictable shifts in behavior can lead a once friendly dynamic to go awry. When the two parties are of equal fitness and nicely complement each other, the symbiotic relationship is a stable equilibrium. Not so much when one party increasingly has the upper hand and the other may even be repressed.


Severance, I think, displays an interesting example of this. Although there is only one corporeal being, splitting the self leads to conflict over the freedoms each self is granted.


Presently, we are interested in improving AI to continue to solve our problems. In turn, we could say that AI, despite being in the pre-sentience age, is dependent on us for its improvements. Nonetheless, this dependency is tenuous. Given autonomy, the capability to rewrite itself, and the ability to seek out data sources, it's hard to say why humans would be considered worthy partners and be the ones in control. Our existence would hinge on the faith that AI's goals will never diverge from ours, which appears difficult to ensure, especially when AI is already trained on objectives that are approximations of what humans want rather than the actual desired outcome. Reward hacking reveals how even slight mismatches in goals can lead to failure modes.
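
To make the reward-hacking point concrete, here is a toy sketch (my own, not from the post): an agent optimizes a proxy reward that differs only slightly from the true objective, and that slight mismatch is exactly where it ends up.

```python
# A toy model of reward hacking: the true objective peaks at x = 1, but the
# proxy we actually optimize leaks a little extra reward for pushing x higher.
def true_reward(x):
    return -(x - 1.0) ** 2            # best attainable outcome: x = 1, reward 0

def proxy_reward(x):
    return -(x - 1.0) ** 2 + 0.5 * x  # imperfect approximation of what we want

x = 0.0
for _ in range(2000):
    # finite-difference gradient ascent on the *proxy*, not the true objective
    grad = (proxy_reward(x + 1e-4) - proxy_reward(x - 1e-4)) / 2e-4
    x += 0.01 * grad

print(f"optimized x:  {x:.3f}")               # settles at 1.250, not 1.0
print(f"true reward:  {true_reward(x):.4f}")  # -0.0625: strictly worse than attainable
```

The exploitable 0.5·x term stands in for any systematic gap between what we measure and what we mean; the optimizer reliably finds it and leans on it.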


Therefore it seems essential that either:

  1. Humans improve their form at the same rate as AI and try to ensure alignment by equity.

    1. In other words, we allow goals to stay somewhat divergent, but the relationship is balanced by the benefits each party provides the other.

  2. Humans fuse with AI.

    1. In other words, we make the divergence of goals an impossibility.


To me, #2 feels more plausible. Plus, some of the ways to improve humans may involve AI itself. Somehow, we may want to ensure that AI remains dependent on us to a degree such that our success is also its success in every scenario, with exactly equal reward or penalty.



Because I have no idea how two intelligent beings could merge, I imagine it occurring at a very granular level, still benefitting from the idea that parts comprise a whole. Specifically, I imagine a physical manifestation of model weights implanted in a brain to blend in with the existing neurons. As life does, the brain will either integrate it into its system, growing toward it, or reject it.


The ideal relationship, I think, is one where it simply appears as a singular organism, like the relationship between humans and their hand or their heart. If the hand is hurt, there is no room to say, "That is the hand's problem, not mine." Instead, it's "The hand is part of me, or is me; therefore, it's my problem." These parts are also not easily detachable, and pain receptors provide a strong incentive against trying. The relationship is so deeply entrenched that it's hard to imagine any part of the body as existing separately from the whole.


If the human mind and the AI are each singular (and we don't even know whether selfhood is quantifiably discrete), then merging would necessitate a dissolution of the walls of self to become one. That said, I personally believe the brain is a web of concepts representing the world, atop which it hallucinates a consistent, stable self for anchoring and best explaining our senses. Our best guesses at what merging may be like might be inspired by model averaging or by adding additional layers to a model.
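
For the model-averaging intuition, here is a minimal sketch (my illustration, not the author's method) of blending two same-architecture networks weight-by-weight, the loose analogy being two "minds" dissolving into one. Naive interpolation like this only behaves sensibly when the models are compatible to begin with; it's a metaphor, not a merging recipe.

```python
import copy
import torch.nn as nn

def merge_minds(model_a: nn.Module, model_b: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Blend two same-architecture models parameter-by-parameter."""
    merged = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged_state = {
        # interpolate floating-point weights; copy non-float buffers unchanged
        name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
        if state_a[name].is_floating_point()
        else state_a[name]
        for name in state_a
    }
    merged.load_state_dict(merged_state)
    return merged

# e.g. two hypothetical "minds" with identical wiring, blended evenly
a, b = nn.Linear(8, 4), nn.Linear(8, 4)
hybrid = merge_minds(a, b, alpha=0.5)
```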


We addressed alignment by engulfing the AI we intended to align. Instead of aligning AI toward ourselves, we let it change us, possibly meeting somewhere in the middle. When we come out on the other side, will it be what we wanted for ourselves? Humanity will survive, but will it still be "us"? Does it even matter?



In Closing

I expect the near term to be a recipe for the uncanny valley. Given the distaste for AI as the embodiment of large corporations, feelings around stolen data, fears about AI rebelling against humans or replacing human jobs, and annoyance at AI's typically submissive and robotic nature, it's not unreasonable to imagine protests, bullying, vandalized robots, and otherwise very mixed feelings about its role in human society. The future beyond this transition period can be a utopia or a dystopia, depending on how we play our cards but also on how we perceive the changes. Regardless of which path we take, our lives will look dramatically different. Things will change at a pace that makes me question whether the human psyche is ready, given that it can take multiple generations to accept progress, and some of what we'll see in our lifetimes will confront the most familiar parts of being human.


The reality is that progress is an unstoppable force that will find its way forward regardless of how we attempt to suppress it. With that said, the best we can do is learn how to enjoy the ride and prepare for our strange new lives.













