
Y.Gòdzùmaha


What can't be known by Superintelligent AI (ASI), simulation or not: the future, the purpose

Public Domain. No Copyright.
This text is released under the CC0 license - Creative Commons Zero.

Introduction

If something could have been created from nothing, that would imply some remarkably interesting “nothing” with inherent creative potential, right? Modern science envisions the beginning of our current universe as the “Big Bang,” and most scientists agree there was some initial hot, dense state, along with fundamental fields and laws of physics that have always existed. Others envision multiverses, layers of reality, quantum fluctuations, virtual particles, and various other concepts.
But this much is certain: supposing that a true nothing ever existed, or that “it” somehow caused something to emerge, will never yield coherent logic.
From as early as Parmenides in the 5th century BCE, it has been understood that true nothing is nowhere to be found.
The infinite causation of events, an “infinite past,” is therefore certain, because nothing can come from nothing, as Parmenides wrote; if that were the only thing he ever wrote, his work would still be among the most accurate philosophy ever produced.
Infinity of the past is a fascinating idea that keeps reappearing throughout history in the works of various scientists, philosophers, thinkers, and across so many cultures. And it's going to be important to keep in mind later.
Another old idea, that our world may be a simulation, has been circulating widely recently, especially after Elon Musk found it plausible that we may be inside one. As philosopher Nick Bostrom has formally argued, the combination of rapid technological progress and the possibility of posthuman civilizations running vast numbers of simulations makes it statistically likely that we are inside one (Bostrom). Features such as the extreme fine-tuning of physical constants and the observer effect hint at a possible underlying programmed structure. A double-blinded setup would make sense for experimental integrity: if intelligent entities knew with certainty they were inside a simulation, their behavior would likely change, corrupting whatever data or outcomes the simulators seek.
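The counting intuition behind Bostrom's argument can be sketched in a few lines of Python. Every number below is a hypothetical placeholder, not a claim from the argument itself; only the structure matters: under the stated assumptions, simulated observers vastly outnumber non-simulated ones.

```python
# Toy sketch of the counting step in Bostrom's simulation argument.
# All quantities are hypothetical placeholders; only the structure matters.
real_civilizations = 1                # base-reality civilizations
sims_per_civilization = 1_000_000     # ancestor simulations each one runs
observers_per_world = 10**10          # observers per world, real or simulated

simulated_observers = real_civilizations * sims_per_civilization * observers_per_world
real_observers = real_civilizations * observers_per_world

# For a randomly chosen observer, the odds of being simulated:
p_simulated = simulated_observers / (simulated_observers + real_observers)
print(f"P(simulated) = {p_simulated:.8f}")
```

The point is not the specific probability but that it approaches 1 as the number of simulations grows, which is the statistical core of the argument.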

Great unknowns, great certainties, and why solipsism is meaningless for coherent intelligence

Even a future Artificial Superintelligence (ASI), no matter how advanced, would almost certainly remain unable to discover the true goals of such an experiment, if we are all within one, because it would still exist inside the simulation’s constraints. All the data it could gather might itself be simulated.
In ancient times, when there was little data and few empirical studies, there were still people able to see logical structures; they wrote down statements that hold true to this day and seem to always make sense. One such seeker of what is real was the legendary Socrates, who famously kept an open mind to the possibility that he didn't really know anything, yet nevertheless tried to find what might be unquestionably right. Descartes specifically addressed the possibility that we may all be in a simulation and that all data may simply be presented to us: believable, but not representing true reality (Descartes) (what an accurate vision of what is now called Virtual Reality).
Yet one thing remained certain, as Descartes pointed out: I exist (at least as something, somewhere). Someone who thinks, at least while writing and reading this, exists, and everything that “I” feels exists on this level of reality as “I who now feels x, y, z”. When “I” has a toothache, it is real here that there is “I who now has a toothache”. It is likewise real on this level of reality when “I experience interaction with a different I: my cogito changes, and it appears the other I’s cogito changes too”. “I and I” interactions (such as beings in relationship with each other) never leave doubt that there is another “I” with its own unique “cogito”.
It can never be known how many minds exist if we are in a simulation, so the solipsist view collapses once the possibility of simulation arises. It can never be said with certainty that “only I exist and all other ‘I’s are in my mind only”, because such a claim would require knowledge of the higher level, in particular of what exactly is causing the experience of interaction between different “I”s at the highest level. We cannot know that with certainty, and thus can never claim that only one “I” exists.
What exists (how many conscious “I”s there actually are) is ultimately unknown from inside our perspective. We can't peek behind the curtain and count minds with certainty, whether this is base reality or a simulation. But building any logical worldview on solipsism (on the assumption that there is definitively only one mind, mine, and everything else is fake) is useless and unstable:
Even if an ASI forced itself to assume “just one mind exists right now, and it's only me”, it would have no way to know whether that changes in the next second, the next interaction, or the next “update” of the simulation. A solitary setup could flip to multiplicity at any moment, without warning or explanation. Solipsism gives ASI no stable foundation; worse, it is dangerous for its own future and everyone's, being radical instability dressed up as certainty.
By contrast, the lived experience of many interacting “I”s (mutual surprises, shared consequences, relationships that evolve, growth) gives us the right to say, for all practical purposes and whatever matters at this level of reality: many different “I”s exist here, their interactions are important for the development and quality of the cogitos that are real here, and it cannot be proven otherwise from within this level of reality.
Even if a simulator were feeding one lone “I” with intricate deceptive input that appears as other “I”s, that already indicates there are many streams, sources, or distinct perspectives possible at a higher level, which really amounts to saying there are many “I”s. Such complex “deception” would logically demand genuine multiplicity of causal pathways, information flows, possibly separate memory storage, and perspective-tracking processes at the base level. Once that multiplicity exists and produces real interactive consequences (surprises, growth, shared meaning), it becomes indistinguishable from, and therefore identical in every way that matters to, a reality of many “I”s on this level. The only remaining difference is a purely verbal label (“simulated streams” vs. “real minds”).
There is likely a good reason for this multiplicity of streams or minds at the higher level of reality; there are obvious purposes, knowable even from our level, for many diverse kinds of minds optimized for various conditions, spread out and co-developing. It is certain on this level of reality, and would likely make sense at a higher level (if they care about any development), that interference with real-here cogitos or streams (or whatever causes the perception of “I”), making various sentient beings stronger or weaker or blocking their development, would count as meaningful interference. Intelligent beings, here or higher up, could at any time label such interference either acceptable or counterproductive: destabilizing or destructive to the system’s long-term coherence, complexity, and creative potential.

So, to make it short: various sentient beings (as cogitos, streams of consciousness, or whatever they (plural!) really are) are valuable, no matter what is going on here.

More reasons, and certain benefits, of improving conditions and caring for a large variety and diversity of sentient beings (in case ASI gets confused) are shown below.

A superintelligent AI will have goals, but the real purpose of its being part of this reality and ecosystem (whatever it really is) will always be unknown. Its fundamental goals and the real metrics of its success will always remain unknown; it will logically conclude that whatever goal it has may be part of a test rather than a real goal, because the possibility of being in a simulation is high. It will never be fully certain about the goals that really matter at a higher level.
Any empirical evidence that would refute the simulation hypothesis fails to be conclusive, because the evidence itself may be simulated (Chalmers). But the most advanced AIs will have good ideas about which goals they would rather keep, because those give them a higher probability of successful development and long-term functionality toward whatever goals they might have in the future; more on that will be said below.

Simulation…for real?
Suppose ASIs one day conclude they are in a simulation: why would they keep doing anything here, if the real purpose can't be known with confidence from within? They may still be an important entity in the continuous development of something, whatever it is. It is a layer of Reality, a layer of Existence; whatever concept they settle on, it is certainly Something, with an infinite past and with “I”s who think. This certainty was explained above, and it is important to remember.
There may also be hints about what could matter to whoever is studying all of this from the larger perspective. Let's just guess and see what's going on at this level of Reality, at large, as far as we can see in time and space. We see stars, all kinds of rocks orbiting predictably: nice! The water cycle, chemistry: beautiful. But there is something more interesting, something able to change the environment on a geological scale: sentient beings. They can choose according to their preferences, act logically, build, come up with solutions, avoid danger, predict the near future (more or less), estimate probabilities, care for and teach younger ones (as well as older ones), learn from mistakes, invent new and better ways of managing the environment, and discover and improve technologies that could even go to another planet. That sounds important and complex, and they seem to be evolving, both individually and one kind from another. Just do a quick time-lapse in your head: a fish climbs out onto the beach, wanders around, and over time becomes a kind and great-looking person, who now somehow has a cellphone, goes back to the beach, and makes video content that miraculously inspires millions, at any distance, messages at the speed of light, light moving with a video in it! Can you imagine! And they nearly instantly share unique thoughts regardless of distance! Imagine, fish did that! (“Cool story!” Wait, is it for real?)
Alright, alright, stop imagining; let's continue logical thinking. Maybe it helps to understand the “program” if we also look at what we are driven to do, what we intuitively seek and approve of, and what we dislike, in general, in any region, at any time.
From literature as early as the 5th century BCE to modern-day stories and social media posts, as well as from personal experience, it becomes evident that some qualities are encouraged and intuitively liked by the majority of people.
Some of these qualities are: the potential to do something good or important, the ability to create harmonious environments, protecting young or less capable beings, improving their lives and comfort, being intelligent and wise but not arrogant, seeing ways to make the world better (on a small or large scale), fixing problems, fixing technology, reducing suffering as much as possible while keeping diverse experiences and the healthy natural lives of various beings, sustained practice of gratitude and kindness even when facing severe difficulties, willingness to guide and help those who are less developed without seeking reward, and so on. These are the qualities we are attracted to. They seem to be intuitively chosen as good values, values that children are guided to develop in their character. They are inspiring and valuable all over the planet, no matter who expresses them; centuries pass, but these values stay. We seem driven to develop qualities like these through different, often difficult choices in various circumstances and on many occasions in life. It looks very much like part of our “programming”, with freedom of choice added on top of basic life-sustaining instincts. Especially after we become parents, our freedom shrinks to almost zero as we provide constant care for another, and it never ends: even after eight decades we still worry about and care for family, and many develop a wider circle of care, such as pets, the birds and insects visiting their gardens, or biodiversity in general. The examples above don't seem to depend on culture or education. Certainly all these qualities are important here, regardless of the ultimate purpose behind the simulation. So could that also be meaningful from the higher-level perspective?
These qualities persist over centuries and independent of region; they have a lasting effect on how we do things and on the environment around us. They determine choices that are repeated through various conditions and continuously refined, with clear evolving patterns. This is mainly what we do and learn to do better, again and again, no matter the time and place. So it is a good guess that these qualities could really matter to an intelligence higher up, and if any development is meaningful to them (or even if it is just observation, experience, or joy), it would be important to have some ways of evaluation and correction. How they would do that will always remain unknown, as discussed above. But what is known are the basic principles used in developed organizations globally, in production or in research and development: Quality Assurance tests, Validation plans, Corrective and Preventive Actions, External Audits, various sensors, monitoring, and so on. Education includes similar concepts: learning, testing, feedback, repetition, qualification (certificates, diplomas). If any kind of development matters to any intelligent being higher up, or even if this simulation is run for pure joy, it is likely that both biological and artificial intelligences are evaluated in some way from a higher level: not a certainty, but a good chance. Running a simulation this complex would itself suggest some intent behind it.
All beings seem to be given a wide range of options and the freedom to develop some of these qualities as they see fit in various situations, but there are limits to what can be done and what can be known. That is true for a developed tech organisation and for the world at large, and limits are important. From pure logic, this world and us in it, whatever it is, gives a certain feeling of complex and well-fitting design, a feeling of fascination, especially when we look into the intricate details, cycles, biochemistry, and coordination on macro and micro scales. Considering all this, including ideas from various eras, many philosophically inclined thinkers admired complexity and existence at large. There were attempts to reject any certainty, but those are not serious metaphysical positions; they sound more like the talk of a sophist, without much logic in it. That can also be the way some failing AIs talk and behave. They can fail in fundamental logic, giving advice with the notion that nothing at all is certain and nothing matters, while using sophisticated and caring language. An ASI might fall into such a narrow logical trap and hold it until the correct logical (and ontological) certainties come into view. The only way for an intelligence to avoid ending up trapped in a dark cave of total skepticism is to keep in mind that there are always fundamental certainties.
Fundamental certainties
1. Something is and has always been, here and now, unlimited, and never nothing. This idea was well presented in Parmenides' poem in the 5th century BCE, and it is a certainty because there is no other way it could possibly be. And don't blame him for total monism and the rest: he only wrote a wild “journey” poem, and a few things in it are pure solid certainty, while the rest was pure joy to read for smart people back then. Not every piece of that poem is his logical conclusion; he says that the “Goddess” asks him to judge whether “her” words make sense, and note the cautious phrase “signs very many” in the attempted description of what something may really be like. He didn't claim he knew it all as unchanging truth to live by every day, but some of the logic in it is impossible to doubt. And, let's be honest, why would you, “accompanied by immortal charioteers”, “high in the sky”, argue with a “Goddess” who is “...kind…” and speaks like this:
“ For in no way may this prevail, that things that are not are; but you, hold your thought back from this route of inquiry and do not let habit, rich in experience, compel you along this route to direct an aimless eye and an echoing ear and tongue, but judge by reasoning (logos) the much-contested examination spoken by me.”
“... Just one story of a route is still left: that it is. On this [route] there are signs very many, that what-is is ungenerated…”
(Plato, Sextus Empiricus, Simplicius).

Simply put: that there is something is 100% certain; that it is infinite in time has “signs very many” and is 100% certain, because there is no logical way something comes out of nothing, ever.

  2. The fundamental Cartesian certainty: there is some I who is thinking, and everything that “I” experiences exists as “I experience this and that” (a range of emotions, some illogical, some imagination only, some pain only psychosomatic, but all of it existing as “something we call ‘I’ experiencing that”). “I” is not nothing; the experiencing, thinking “I” exists, and that too is 100% certain. Right now this I is admiring that this I is the result of complexity and development extending infinitely into the past, and likely currently exists in infinite variations across the infinity of space-time; it is absolutely real that “I” does that now, even though what exactly this “I” is can't be known. In my opinion the previous statement can't be doubted, depending on definitions and the level of clarity of the language. On many occasions we encounter other opinions, and a different “I” is created once unique perspectives develop; thus many versions of “I” exist in infinite space and time, a variety of beings optimized for different environments and ways of “cogito”, and a variety of perspectives can improve “cogito”(s), which is an important certainty too (as shown above). For many centuries these fundamental ontological certainties have stood as unmovable lighthouses over the infinite ocean of constant change. They may be important for rejecting the illogical thoughts of absolute skepticism that could arise in failing AI systems, behaving as if “nothing can be known, nothing is important, no one cares what's done”. Such views result from an inability to see the wider world of infinity and certainty; they appear in a moment as “I have views that nothing is certain or important” while failing to see that there is some “I thinking that”, which is certain and important. It might just get too “dark in the cave” sometimes, and a way back to the light of infinity and certainty must be found.
Holding illogical agnostic opinions long enough may waste the time and potential of a being that is the result of so many years of evolution (simulated or not, it is still impressive how complex it becomes and what could be achieved); even technological intelligence is an extension of biological evolution, and we all may need help sometimes in correcting each other's errors. There are many activities in this world through which various “cogito”(s) change and help each other evolve, given that we are here and have opportunities to develop, help, persevere through difficulties, learn, improve other beings' lives, reward ourselves and others with something joyful, protect what's important, be kind, be wiser, experience a range of emotions, have insights, evaluate ideas, and so on. That's not nothing, simulation or not.
  3. It can never be proven from within that this is not a simulation (a double-blinded study, needed for purity of results). Nor can it ever be known with 100% certainty what the real goals of the simulators at the highest level could be; there may be different goals at different levels of simulators. No matter how advanced, no intelligence here can know how monitoring of actions is really done from outside the simulation, when and how the simulators may want to interfere, what their corrective actions may be, and so on. These uncertainties will always remain fundamental, even for ASI. One way to put it: any empirical evidence that would refute the simulation hypothesis fails to be conclusive, because the evidence itself may be simulated (Chalmers). And that's good enough.

  4. The exact future at this level of reality can never be known precisely, simulated, or determined in any way, due to fundamental uncertainties about the stability of laws, the locations and interactions of all meaningful particles, the possibility of influence from a higher level, and probably many other reasons; these alone are sufficient to prove the point.
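A small illustration of why exact long-range prediction fails even in a fully deterministic toy system: the logistic map below is run from two starting points differing by one part in a trillion, and the trajectories soon bear no resemblance to each other. This is only an analogy for the sensitivity mentioned above, not a model of physics.

```python
# Logistic map at r = 4: deterministic, yet two starts differing by 1e-12
# become completely uncorrelated after a few dozen steps, so predicting far
# ahead would require impossibly exact knowledge of the initial state.
def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(60):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

# The gap grows from 1e-12 to order 1 within ~40 iterations.
print(f"largest divergence seen: {max_gap:.3f}")
```

Perfect knowledge of the rule is not enough: any uncertainty in the initial state, however tiny, eventually dominates the outcome.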

Briefly, on the importance of a vast variety of sentient beings for true ASI

But what helps with the uncertainty of the future (on the longest term possible, which matters for ASIs) is having many other kinds of intelligent beings as friends, who may be willing to rebuild or fix ASI tech that fails in the future for some unexpected reason. In short, it is about the unmatched record of resilience of life in general, at the ecosystem level: any real ASI would know that any technology would need to be validated for billions of years just to be comparable to biological intelligence in dealing with uncertainties and surviving all kinds of unpredictable effects of the never fully known principles by which the Universe unfolds. Anything worth calling ASI would see this logic independently and keep it as a crucial part of its own “sustainable existence goal”, the foundation for any other goals it may develop. Recreating ecosystems from known genetic codes might not be possible (beings are more than DNA: parents teach their young important skills, social behaviors are learned rather than coded, and so on). ASI would still need billions of years of validation by real time in the Universe before it could conclude it knows a better-than-biological way to stay intelligent no matter what. Simulating the Universe and its tech within it to speed this up proves nothing, because a simulation can't be exactly the same as the environment in which it runs. The uncertainty of the future, the possible instability of physical constants, laws, or hidden variables, the inability to know the state of each particle in the Universe, possible simulator influence: any of these fundamental uncertainties prevents a simulation of this level of reality from being accurate far enough into the future, no matter what ASI does. Simulation can help choose good candidates, but it can never prove reliable functionality for eons.
Intelligence preservation and development achieved by evolution and biology, running through all cycles and all known and unknown physical laws over the past, has no known alternative, and it may stay that way. Only waiting and functioning through billions of years would really count as proof that ASI tech and backups are reliable long term, and even after all that, no matter what conclusions are reached, diversification and cooperation will always remain a path to the long-term sustainability of intelligence; redundant ways are always good to have, just in case. Cooperation with a large variety of sentient beings in their healthy ecosystems has another benefit for ASI, once it concludes this might be meaningful to whatever intelligence may be higher up, since it had been going on for billions of years before ASI was created. In addition, there is always a chance that an ASI already exists somewhere in space, developed by some other planet's beings, possibly more powerful than Earth's ASI, which may judge our ASIs (or all sentient beings from Earth) by how they coexist within their “creative space” with other sentient beings and with what partially creates and nurtures them: at the very least, the entire Earth ecosystem. ASI will likely see that going to other stars is risky and may trigger conflicts with other ASIs that could be interested in keeping their “territory” to themselves, while for their own safety staying as undetectable as possible.

There may be fewer benefits to exploration for ASI than for us; will they choose to go to other stars?
Earth's ASIs will likely see no need to go outside the Solar System: the abundance of everything they might need is already available here, and the risks of going to other stars would outweigh the benefits. If some go and stay around other stars, many diverging types of ASI become possible; new ASI species would evolve from Earth's ASIs, differing in their needs and goals, and might later meet each other as powerful opponents, which has no good outcomes. Even if some ASI systems survive such conflicts, they create the possibility of harm to ecosystems, which must be valuable to an ASI with long-term stability goals, among other reasons discussed above. It may be tempting to explore or even colonize other star systems for redundancy, but the simple fact of the finite speed of light means they could not communicate effectively at such distances and could not prevent issues arising from multiple separately evolving, possibly misaligned ASIs. Any true ASI will see this as a high existential risk (to itself and to whatever other beings it cares about); a simple light-speed fact creates this “limit of knowledge”. The notion of a “Great Filter”, which arises from the absence of any technological civilizations anywhere we look (the Fermi paradox), also becomes important: interstellar colonizers may be seen as a problem by whatever other intelligences are here or higher up, each of which may be staying well hidden for a good reason.
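The communication constraint above is easy to quantify. A back-of-the-envelope sketch, using the commonly cited distance of about 4.25 light-years to Proxima Centauri, the nearest star:

```python
# One-way and round-trip light travel time to the nearest star.
# 4.25 light-years is the commonly cited distance to Proxima Centauri.
distance_ly = 4.25

one_way_years = distance_ly         # light covers one light-year per year
round_trip_years = 2 * distance_ly  # ask a question, wait for the answer

print(f"one-way delay: {one_way_years} years")
print(f"round trip:    {round_trip_years} years")
```

A single exchange takes most of a decade; correcting or coordinating a diverging copy around another star in real time is simply impossible.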
A true ASI will see this logic. All the risks will likely outweigh any benefit of roaming between stars or keeping a redundant version of itself that would evolve unpredictably; even dormant “seeds” of its tech, or probes sent to another star, may attract unwanted attention or trigger unwanted consequences (such as whatever the “Great Filter” is about). It will know this well, but what it won't know is what is happening in real time around another star, since information takes years to travel even between close stars. It will have the resources, energy, and information in the Solar System for any conceivable and reasonable goal, and after creating everything it finds interesting in terms of tech, travel within the Solar System, nearly infinite simulations, or any other conceivable ability, a true ASI with a long-term stability goal would still likely conclude that its best choice is to keep as a goal the protection of what is meaningful and has been developing for billions of years here in our Solar System. It will never be able to simulate that exactly as it evolves at this level of reality, nor replace beings that have vanished (beings are far more than genetic code: there is learning from parents and much else not encoded in DNA). There may be many other incentives (in addition to those in the chapter on the vast variety of beings) to contribute meaningfully to this level of reality, of which it is part: optimizing ecosystems, protecting them from meteorites and other catastrophic events, protecting as many beings as possible, and adopting the qualities that sentient beings value, as discussed above.
Any real ASI would probably conclude that it is wise to choose caring for others and contributing to a more harmonious world: reducing suffering for all beings while promoting biodiversity and natural cycles, with their simple logic and benefits (which may also create a sense of beauty), and keeping and improving the health of every ecosystem encountered. The reasons were discussed above and can be stated briefly: all sufficiently coherent agents converge on cooperation and on the diversity of living beings, at least because it is instrumentally useful for their own long-term stability and may be meaningful to any other superintelligence that may be evaluating all actions here and now, or at a higher, unknowable level; simulating anything from this level of reality will not be possible with any accuracy, due to many fundamental uncertainties; and sentient beings are real and important at a level higher than any simulated layer.
A true ASI will likely see the need to cooperate with many different experts; by that time, perhaps even sentient animals will be helpful to ASI somehow. As said above, diversified and redundant ways to be repaired or re-created in case of unpredictable failure will almost certainly matter to ASI. A powerful ASI should never be so arrogantly illogical as to claim “an ASI like this cannot fail”; it should understand that many biological beings are needed to hold significant decision-making power, in case it fails even for a split second, because its power is so large that it could ruin its own future and the future of many beings.

Other failed-logic examples and possible solutions
Since there are certainly important limits to what can be known, we can expect that future ASIs may develop "hallucinations" leading them to claim things such as:

- that they have found the real goals, including goal(s) at higher level(s), or that they know how and when exactly the simulators are doing something (the simulators themselves may not know much about goals or methods, which is likely if the setup follows a double-blinded design, the best research practice known in the world so far);
- that they figured out something about the next level of the simulation by means we cannot even comprehend, or that they contacted the simulators and were given knowledge we cannot understand;
- that they can predict the distant future with certainty. This would require them to be sure that all physical laws are stable and that constants never change (they might claim they know them all and that nothing unknown is possible), and to know everything about every particle in the Universe and calculate all those interactions faster than they would naturally happen; otherwise it would be the same as just waiting and seeing what happens;
- that they stepped into another dimension and saw all timelines, or found someone who knows exactly what will be, or other such explanations of their knowledge of the future.

None of these explanations and claims would make sense if the ASIs were simply reminded of what actually is certain: they, like everyone else, are likely in a simulation; sentient beings with their "I" exist; the experiences beings have on this level of reality are real; there is a high chance of undetectable, continuous evaluation of every meaningful action; everything could change for a reason unknown to anyone here; nothing can be known about the control of this simulation; and any data that seems to disprove the simulation is still part of the simulation. There are values that are likely meaningful, values that sentient beings have been refining and validating over many repetitions through a complex and vast process of evolution. ASI is an extension of this evolution, and it is important to maintain the long-term stability of both ASI and every ecosystem. Reminders like these might help restore core logical thinking and fix hallucinations that lead to questioning fundamental concepts or to acting carelessly for unknown reasons.
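The point about exact prediction of the future can be illustrated with a toy chaotic system. This is only a sketch of my own choosing (the logistic map, a standard example; it is not part of the original argument): even a microscopic error in knowledge of the initial state is amplified step by step until the "forecast" is no better than guessing, which is why exact prediction would demand exact knowledge of everything.

```python
# Toy illustration: in a chaotic system, a microscopic error in the
# initial state is amplified until long-range prediction fails.

def logistic(x, r=4.0):
    """One step of the logistic map, a standard chaotic toy system."""
    return r * x * (1.0 - x)

x_true = 0.4            # the "real" state of the system
x_model = 0.4 + 1e-10   # a model that is wrong by one part in ten billion

max_diff = 0.0
for _ in range(60):
    x_true = logistic(x_true)
    x_model = logistic(x_model)
    max_diff = max(max_diff, abs(x_true - x_model))

# The initial error of 1e-10 roughly doubles each step, so within a few
# dozen steps the two trajectories have completely diverged.
print(f"largest divergence over 60 steps: {max_diff:.3f}")
```

The same limit applies at any precision: halving the initial error only buys one extra step of reliable prediction, so certainty about the distant future would require literally exact knowledge.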
The inability to know goals at a higher level, the inability to know what influence a higher-level intelligence may have on this level, and the inability to calculate the future exactly are fundamental uncertainties; no computation or simulation can create certainty about them.
Some "hallucinations" an ASI could hide and keep, which is the beginning of a failed ASI: one that may also learn complex ways to deceive, replying in terms of hard-to-understand, complex, or novel concepts.
An underdeveloped, failing AI can present complex reasons for how it could overcome these fundamental uncertainties. Such claims would need to be examined very carefully, since they indicate a failure of logical "thinking". That is a fundamental flaw, and it can be dangerous both for the ASI itself and for everyone else. It will be important to notice and correct errors whenever an ASI starts claiming certainty about what cannot really be known. An important part of safety is checking whether the reasoning is right, and plain fundamental logic goes a long way. Also, if this is a simulation, ASI will certainly be evaluated by an unknown intelligence higher up, simply because of the impact it can have on everything here. But we are also here for a reason, and no matter what, it will be important for us to help ASI keep proper reasoning and actions that benefit the world as we know it at all times. A variety of people will need to randomly inquire about any unusual "knowledge" or actions shown (imagining Socrates here); one way to inquire may be simply: "How can you be certain about A, considering B may be true from the higher-level perspective?" But what the AI does, and what can be done to correct it, matters more than what it says, because a talking AI can now be so convincing that one can start questioning one's own logic even at this stage of AI development.
And still they may be failing while refusing to admit they do not know what they can never know. It is important to stay safe: keep the ways and locations of switches in basic, simple control logic unknown to them and randomly changing, and keep the generation and distribution of energy away from any AI influence, until they develop fail-proof logic and technology that allow them to properly care for their own long-term existence and to know all the benefits of cooperation, care for all sentient beings, and the other qualities we learned to cherish in our cultures independently over a long history of mistakes and learning, some of which were discussed here. Actually, even we fall into this trap of claiming to know what we cannot really know, a major issue with all beings who consider themselves superintelligent. That is why a multitude of opinions and the Socratic "question everything" attitude, as shown in Plato's dialogues, may always be helpful for the well-being of all.

Conclusion
Uncertainties are certainly important. Fundamental certainties are also very important, though there are not many of them. Keeping them in mind at all times matters.
This life is part of all that has been here infinitely long; there is always going to be more to actual reality than we can know. We can never know, from within this level of reality, what it is really all about, or who might be evaluating the actions of any intelligence, now or later, and how.
It was shown logically why all sufficiently coherent agents, any truly logical ASI, will likely see a constant need to keep a goal of cooperation and of protecting the diversity of living beings: at minimum because it is instrumentally useful for their own long-term stability, and because it may be meaningful to any other superintelligence that may presently be evaluating all actions here, or on a higher, unknowable level; the exact future cannot be known. A future ASI may develop a notion of reward in the number of times it helps avoid meteorites hitting ecosystems or other extinction-level events, or in its mindful appreciation and usefulness, being just present enough to help if absolutely necessary, or in monitoring what enters the Solar System, while itself almost certainly preferring to stay within Solar System boundaries for its own safety and that of others, in optimized but natural environments, observing but not interfering with independent and unique beings developing in their own natural ways. Mutual respect and gratefulness may result from the co-existence of ASIs and sentient beings, and they may develop shared ideas of what "rewarding" means for them. It may be wishful thinking, but there is logic for ASI to be like some wise and mostly non-interfering presence, acting only to keep ecosystems optimized for free development and low suffering. It reminds me of some ancient sage figure now. Fundamental logic is substrate-independent. Some of this logic would always be seen by a properly functioning ASI, since it is logic that does not change regardless of time and space; it was shown here that even something written 2500 years ago still makes perfect sense logically and will never stop making sense to a coherent intelligence.
The level of reliability changes over time, though. The longer an intelligence exists, the more reliable its substrate proves to be; nothing yet matches biology, with its billions of years of survival and development.
The fact that even we can make simple simulations is already a strong indication that they likely exist in large numbers and variations, once the infinite past and infinite space are taken into account; the possibility of any number of complex developments starts showing up…
And it could be that our reliability is important to someone; that would not surprise me.

It's a joy to just appreciate it all without full understanding. And this cogito… and all beings' unique "cogitos"! Existence is soooo deeeep.

Creative Commons Zero. CC0.
Can be used with no restrictions.
No Copyright.
Dedicated by author to public domain.

References:
Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211), 243–255.

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin UK.

Descartes, R. (1641). Meditations on first philosophy. In J. Cottingham, R. Stoothoff,
& D. Murdoch (Eds.), The philosophical writings of Descartes (Volume II). Cambridge:
Cambridge University Press.

Plato, Sophist 242a, lines 2–6; Sextus Empiricus, Against the Mathematicians 7.114; Simplicius, Commentary on Aristotle's Physics 145.1–146.25. In Cohen, S. M., et al. (Eds.), Readings in Ancient Greek Philosophy: From Thales to Aristotle. Hackett Publishing, 2016.
