As I said on the previous page, a simulation designer or the designer of a realistic virtual reality environment would also have to define and apply strict fundamental ‘reality’ limits to avoid overloading the processors.
A simulated reality environment, just like a virtual reality environment, would UNLIKE A REAL REALITY present defined limits as well as uncharacteristic behaviour that researchers would find ‘odd’ and ‘unrealistic’ with respect to what they would expect of a real reality.
Interestingly, all of the below have been put forward as evidence that we are in some sort of virtual reality BUT no one takes any notice – which, again, is exactly what you’d expect to happen ‘IF’ we are being managed. A friend of mine, Kay A., wrote most of the below, as I’m more interested in focusing on the MACRO evidence that is directly observable by everyone.
Is The ‘Big Bang’ the Computer Simulation Booting Up?
The leading (most popular) theory of how our universe was created implies that it all arose from ‘nothing’ in a single ‘event’, before which ‘no-thing’ existed: the scientific equivalent of ‘let there be light’. Huge amounts of time and effort are put in by highly intelligent people to describe how this may have occurred, without ANY of them making the connection that they do exactly the same thing to the virtual inhabitants of their computer simulations every single time they press the ‘power on’ button.
So I wonder: did the universe start up with the Windows or Mac start-up sound?
Is the ‘Speed of Light’ Specifically a Limit Set by the Rendered Frame Refresh Rate?
An objective reality has no reason for a maximum speed, but every simulation has a maximum rendered frame refresh rate that limits local transfers.
A complex simulation requires many processing nodes in order to function. These can be imagined as the processing ‘cores’ in your computer. The more you have, the more you can get done, but the more time it takes for data to move between them. So for any simulation hardware there is a limit to how far information can travel between any two rendered frames, and this WOULD be directly observable from inside the simulation for things IN the inhabitants’ environment.
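The idea above can be sketched in a few lines of code. This is purely an illustrative toy, not any real engine: a tick-based simulation where a signal can cross at most one grid cell per rendered frame, which inhabitants would measure as a hard, universal speed ceiling.

```python
# Hypothetical sketch: in a tick-based simulation, a signal can cross at
# most one cell of the processing grid per rendered frame ("tick").
# From inside, that shows up as a hard maximum speed for everything.
# All names here (propagate, max_cells_per_tick) are illustrative.

def propagate(start_cell, ticks, max_cells_per_tick=1):
    """Return the farthest cell a signal can reach after `ticks` frames."""
    return start_cell + ticks * max_cells_per_tick

# No matter how the signal is produced, inhabitants measuring distance
# against time always hit the same ceiling: one cell per tick (their
# 'speed of light').
for t in (1, 10, 100):
    assert propagate(0, t) <= t * 1   # can never exceed the hard limit
```

However the inhabitants generate the signal, the ceiling comes from the hardware tick rate, not from anything inside their world.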
Planck Limits: A virtual environment can have ANY degree of accuracy the designers wish to endow it with. However, the higher the detail, the more costly the environment is to build and run. Even if the designers use simplification approaches (simplifying approximations to reduce detail) to make it cheaper, there will always be an upper and a lower boundary for detail in a simulation. These upper and lower boundaries will be directly observable by the inhabitants of your simulation, and the LOWER ones will most likely be observed ‘first’, as your inhabitants will very likely be ‘closer’ in scale to the lower boundaries than to the upper ones.
At the highest levels of detail (the smallest sizes possible) things will appear to exist only as discrete values. For example, in binary you can ONLY have 0 or 1 (there is NO 2 or 0.5). In octal you can have any integer between 0 and 7 (there is no 8 or 4.5). So at the level where the inhabitants are looking at the base numbering/measurement system itself there will not be any in-between values. There cannot be, because there is no detail ‘beyond’ the limit defined and hard coded by the simulation hardware.
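A minimal sketch of what a hard-coded lowest level of detail looks like from the inside. The step size here is an arbitrary assumption chosen for illustration; the point is only that every stored value ‘snaps’ to the nearest representable grid point, so differences smaller than one step simply cannot exist.

```python
# Illustrative sketch: if positions are stored in fixed-size units, every
# measurement "snaps" to a smallest representable step, a Planck-like
# limit. PLANCK_STEP is an assumed value, purely for illustration.
PLANCK_STEP = 1.0 / 1024

def snap(value, step=PLANCK_STEP):
    """Round a continuous value to the nearest representable grid point."""
    return round(value / step) * step

# Two values closer together than one step become indistinguishable:
a = snap(0.10000003)
b = snap(0.10000004)
assert a == b

# And there are no in-between values: every result is a whole number
# of steps, exactly as the text describes for binary or octal digits.
assert snap(0.5) / PLANCK_STEP == round(snap(0.5) / PLANCK_STEP)
```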
Non-Locality: Effects that instantly affect entities anywhere in the universe, like entanglement and quantum collapse, are impossible in an objective reality, but a software program can alter pixels anywhere on a screen, even one as big as our universe.
For information to travel from one ‘point’ to another there needs to be some connection between them. It doesn’t matter what it is or how it works, but there HAS to be a connection.
In a ‘real’ environment that connection is likely to be observable, and how the information travels between those two points would also be observable: you could interrupt it, change it en route, etc. There is ONLY one frame of reference, one ‘path’ for the information to travel down: the ‘real’ one.
However, in a simulated environment you have TWO co-existing frames of reference. As in a ‘real’ environment, you have the one from the simulated inhabitants’ perspective, rendered frame by frame, including information travelling from one point to another. They could watch it travel, change it, intercept it, etc.
BUT you also have the frame of reference outside their perception: information that travels ‘between’ frames. From the inhabitants’ perspective, information travelling that way would be instantaneous, as any time spent travelling would be invisible to them because it happened between their rendered perceptions. So non-locality and a huge range of quantum ‘weirdness’ effects are the result of us observing ‘out of frame’ events. While we can observe their cause-effects, we cannot become aware of the ‘connection’ between the two, so these events make zero sense by comparison and seem to contradict our ‘in frame’ models of the universe. The two are inherently incompatible because they are based in two totally different realities!
As an aside, if you also consider ‘energy’ as virtual information, or ‘electrical resistance’ as a virtual property, then superconductors and their apparent ‘zero’ resistance are once again us observing things being done by processes outside our virtual frame of reference. Energy is transferring between points faster, and with less resistance, than should be possible.
Incidentally, these kinds of effects would only become apparent and observable with a high degree of technological advancement, so you would want to limit your inhabitants’ technological capabilities before they really started to understand and observe these things.
Mass: Why do objects have mass? Well, in a simulation every object you add to an environment increases the number of object-object interaction checks dramatically, depending on their distances. 1000 balls in a small bag means each ball has to be checked against every other ball to see where they are all going to be on the next ‘frame’. BUT if you have the same 1000 balls in a football stadium there is far less chance of them coming into contact… and so far fewer calculations are needed.
You can simplify these calculations greatly by treating ‘objects’ as perfect spheres with a point at the centre that holds information about how ‘big’ they are coupled with how likely they are to impact other objects. For example, a big, dense, elephant-sized iron truck would have a greater ‘interaction value’ than a small polystyrene ping pong ball. Just for the sake of argument, let us call this interaction variable ‘mass’.
So if you fired 10k ping pong balls at your truck, the vast majority of these interactions are going to be unobserved, so it becomes a MUCH cheaper calculation to do a simple ‘total mass’ exchange of ‘movement energy’, along with some ‘random’ directions for the low-mass ping pong balls, coupled with a single calculation for the large-mass object.
An object’s ‘mass’ value then becomes a generalisation of how ‘significant’ its impact will be in any environment. Ping pong balls fired at a truck… the truck wins as it is less impacted by them… but fire 10k trucks at a super tanker… the super tanker is the more ‘significant’ object and the trucks go spinning off in random directions.
The simpler you can make calculations, the more efficient your simulation will be, but this comes at the cost of accuracy and can create directly observable weirdness, especially for the inhabitants when they are trying to understand their world… the current scientific ‘theories’ only make sense if you include particles WITHOUT MASS, or even MASS without particles.
Then, of course, why does ‘gravity’, which is what mass-mass interactions are called here, function totally differently from the other forces? Why does ‘gravity’ seem to sit ‘outside’ any functioning theory?
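The ‘total mass exchange’ shortcut described above can be sketched as follows. All the numbers and names here are made up for illustration; the point is that one bulk momentum calculation plus random scatter directions replaces ten thousand individual collision checks.

```python
import random

# Hedged sketch of the "total mass exchange" shortcut: instead of
# resolving 10k individual ball-truck impacts, sum the small objects'
# momentum into one bulk calculation and scatter them randomly.

def bulk_impact(big_mass, small_mass, n_small, small_speed):
    """One cheap calculation replacing n_small detailed collision checks."""
    total_small_momentum = n_small * small_mass * small_speed
    # The massive object barely moves: the momentum is spread over
    # its much larger mass.
    big_velocity_change = total_small_momentum / big_mass
    # The low-mass objects just get random bounce directions,
    # with no fine per-collision detail computed at all.
    bounce_dirs = [random.uniform(0, 360) for _ in range(n_small)]
    return big_velocity_change, bounce_dirs

dv, bounces = bulk_impact(big_mass=5000.0, small_mass=0.0027,
                          n_small=10_000, small_speed=10.0)
assert dv < 0.1            # the truck is barely affected
assert len(bounces) == 10_000
```

Swap the masses around (trucks fired at a super tanker) and the same formula automatically makes the more ‘significant’ object the one that barely moves.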
Malleable Space-Time: Mass and movement shouldn’t alter time or space in an objective reality, but in a virtual reality, the processing load of a massive body could dilate virtual time and curve virtual space, given that time and space also arise from processing.
Why does an object’s mass distort ‘space-time’?
Well, as written above, an object’s ‘mass’ is how significant it is in any calculation. For VERY complex simulations you could spend ALL your processing cycles working out object-object interactions… this becomes a hard limit on how ‘big’ your simulation can get with this approach. Basically the cost formula is “number of objects x number of objects”… which gets very big VERY quickly!
But what if, instead of working out every single object-object interaction for every object in your environment, you only had to work out one calculation per object instead?
Imagine instead that you had an elastic 3D grid where each point takes on the mass values of the objects at any position within it. Now you only have to calculate the grid-object interactions, which cost “number of objects x 1 (grid) x 2 (object->grid + grid->object)”. You have to do the grid-object interactions twice because the blank ‘empty’ grid has to add up all the mass values of all the objects, and then all the objects have to inspect the grid to see in which direction they are ‘pulled’.
It is a little more complicated than this, but the moment you have more than 2 objects in your environment the ‘grid’ approach becomes cheaper. For the mathematicians and computer programmers, it changes from an “n squared” complexity problem into a “2n” one.
From the ‘inside’ of a simulation, though, this simplification would give rise to the observable effect that objects distort the underlying ‘space-time’ of the universe! When in fact the virtual universe is distorting its calculation ‘grid’ to make interaction calculations ‘cheaper’! Cool huh.
Inertia: Another simplification method can be used to reduce the ‘cost’ of objects in motion. You have a ‘high detail’ check along the direct flight path and a ‘low detail’ check along the possible flight path (which is much wider). To allow objects to move faster while keeping the same processing ‘costs’, the faster the object goes, the ‘thinner’ the two cones need to become… which means the ‘less’ deviation you can allow in the ‘low detail’ possible flight path cone. This translates to: ‘the faster an object is going, the harder it has to be to change its direction, AND this has to be in proportion to how much it impacts the world around it… i.e. its MASS’.
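The two cost formulas from the text can be compared directly. This is a toy accounting exercise, nothing more: `pairwise_cost` is the “n squared” approach of checking every object against every other, and `grid_cost` is the “2n” grid approach (deposit mass onto the field, then read the pull back).

```python
# Sketch comparing the two cost formulas described above.
# "pairwise" checks every object against every other (~ n squared);
# "grid" touches each object twice (object -> grid, grid -> object).

def pairwise_cost(n):
    return n * n       # the "number of objects x number of objects" formula

def grid_cost(n):
    return 2 * n       # "number of objects x 1 (grid) x 2" interactions

for n in (2, 3, 10, 1000):
    print(n, pairwise_cost(n), grid_cost(n))

# The two approaches cost the same at 2 objects, and the grid wins
# the moment there are more than 2, exactly as the text says:
assert pairwise_cost(2) == grid_cost(2)   # 4 == 4
assert pairwise_cost(3) > grid_cost(3)    # 9 > 6
```

At 1000 objects the pairwise approach needs a million checks against the grid’s two thousand, which is why the shortcut matters at universe scale.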
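The two-cone idea can be captured in one made-up formula. The fixed check budget and the exact relationship here are assumptions for illustration only; the point is that holding per-object cost constant forces the allowed deviation to shrink as speed and mass rise, which is what inertia feels like from the inside.

```python
# Hedged sketch of the two-cone inertia idea: keep the per-object check
# budget constant by narrowing the "possible flight path" cone as speed
# (and mass, i.e. interaction significance) rises.
CHECK_BUDGET = 100.0   # assumed fixed processing cost per object per frame

def allowed_deviation(speed, mass):
    """Cone half-width the simulation can afford to check this frame."""
    return CHECK_BUDGET / (speed * mass)

# Faster objects get a narrower cone (harder to change direction)...
assert allowed_deviation(100, 1) < allowed_deviation(10, 1)
# ...and so do more massive ones, in proportion to their mass:
assert allowed_deviation(10, 5) < allowed_deviation(10, 1)
```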
Superposition, Probability Wave Function Collapse, Quantum Tunnelling and the Casimir Effect
These are all closely related. To reduce the calculation complexity of a simulation you can ‘blur’ or ‘simplify’ any object into its most likely states and interactions. In doing this, objects stop behaving like ‘objects’ and start to behave like a probability distribution of their properties instead. So things can ‘spin’ in two directions at once, and they only ‘decide’ what they are going to do when interacting with another object or an observer, ‘collapsing’ the probability wave function into an actual ‘event’.
In non-scientific terminology: instead of imagining a particle as a solid ‘thing’, imagine it as a ‘cloud’ of ‘maybe’. The particle ‘might be here’, it ‘might be there’, it ‘might be doing this or that’. All the things that it ‘could’ be doing are defined by a maths function.
The most famous of these thought experiments is Schrödinger’s Cat, where the cat is both alive AND dead at the same time. The actual state of the cat (a very pissed off moggy) would only be known when you opened the box, i.e. when you ‘observed’ it.
An interesting effect of this is that in a simulation using this approach, small discrete particles (one single thing) gain the ability to exist and not exist at the same time, OR can ‘jump’ or ‘tunnel’ from one place to another. Basically, ‘empty’ space becomes a sea of possible particles that ‘could’ pop into existence at any time out of ‘nothing’ . . . and then disappear again.
Interestingly, that ‘cloud’ is unaffected by any solid barrier it comes into contact with, so IF that barrier is thin enough, part of that ‘possibly, maybe here’ cloud would be on the other side of it, and miraculously, when you decide to measure where the particle actually is, its ‘cloud’ could collapse on either side of the barrier. It would do this without interacting with the barrier at all!
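The ‘cloud of maybe’ that only decides on observation is, in programming terms, lazy evaluation. A toy sketch, with all names invented for illustration: the particle holds a list of possible positions and only picks one the first time something looks at it, after which repeated measurements agree.

```python
import random

# Illustrative sketch: a particle stored as a probability distribution
# ("a cloud of maybe") that is only resolved to a definite value when
# something observes it -- the "collapse".

class FuzzyParticle:
    def __init__(self, possible_positions):
        self.cloud = possible_positions   # all the places it "might be"
        self.collapsed = None             # no definite state computed yet

    def observe(self):
        """First observation picks one outcome; later ones agree with it."""
        if self.collapsed is None:
            self.collapsed = random.choice(self.cloud)
        return self.collapsed

p = FuzzyParticle(possible_positions=[1, 2, 3, 4, 5])
first = p.observe()
assert first in p.cloud        # the outcome came from the cloud
assert p.observe() == first    # repeat measurements are consistent
```

Until `observe()` is called, no definite position exists anywhere in memory, which is exactly the cost saving the paragraph above describes.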
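Tunnelling falls straight out of that picture. In this toy sketch (all values invented), the cloud is a list of possible positions that simply ignores the barrier; because part of the cloud lies beyond it, some fraction of collapses place the particle on the far side without any barrier interaction ever being computed.

```python
import random

# Sketch of tunnelling as described above: the "maybe" cloud ignores the
# barrier, so if the barrier is thin enough part of the cloud sits
# beyond it, and a collapse can land the particle on the far side.
random.seed(0)   # fixed seed so the run is repeatable

BARRIER_AT = 10.0
cloud = [9.7, 9.8, 9.9, 10.1, 10.2]  # positions the particle "might" hold

tunnelled = 0
for _ in range(1000):
    position = random.choice(cloud)   # collapse the cloud to one value
    if position > BARRIER_AT:
        tunnelled += 1                # it "appeared" past the barrier

# Some, but not all, collapses end up on the far side -- no barrier
# interaction was ever calculated.
assert 0 < tunnelled < 1000
```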
Quantum Tunnelling: An electron “tunnelling” through an impenetrable field barrier is impossible for objects that continuously self-exist (like a coin popping out of a perfectly sealed glass bottle), but a program entity distributed across many “instances” can restart at any one of those instances, i.e. “teleport”. If the world acts like a virtual reality, not an objective reality, then the duck principle applies: if it looks like a duck and quacks like a duck, then it is a duck.
Superposition: Objective entities shouldn’t be able to spin in two directions at once, as quantum entities do. However, in a software-based reality a program can instantiate twice to do this.
Equivalence: That every electron in our world is exactly like every other is unexpected for an objective world. However, every electron could be software-generated from the same basic code template. The complexity of a simulation is directly related to how many different objects you have within it. It is a very simple and effective approach to have lots of objects use the same ‘data’, as you can pre-calculate it and store it. Then, instead of having to calculate things each and every time you need to render an object, you just copy and paste it from a database. This is especially true of the very ‘small’ and ‘numerous’ things. Every electron is the same because it is based on the same base ‘data’.
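The shared-template idea is the classic flyweight pattern from software engineering. A minimal sketch, with the property names and values invented for illustration: every electron instance points at one pre-computed record, so all electrons are literally identical by construction.

```python
# Sketch of the shared-template ("flyweight") idea: every electron
# object points at one pre-computed record instead of carrying its own
# copy of the data. Property names/values here are illustrative only.

ELECTRON_TEMPLATE = {"charge": -1, "spin": 0.5, "mass_units": 1}

class Electron:
    template = ELECTRON_TEMPLATE    # one shared record for all instances

    def __init__(self, position):
        self.position = position    # only the per-instance state varies

a = Electron(position=(0, 0))
b = Electron(position=(5, 3))

# Every electron is *exactly* alike because the data is literally shared
# -- it is the same object in memory, not two equal copies:
assert a.template is b.template
assert a.template["charge"] == b.template["charge"] == -1
```

Adding a billion more electrons adds a billion positions but zero new copies of the template, which is the storage and pre-calculation saving the paragraph describes.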
Simulation Argument Evidence Research Papers & Links Discussing or Expanding on the Above
1. The Virtual Reality Conjecture:
2. Evidence that our reality is Digital, Evidence for a digital consciousness:
Web site page here. A quote from this page . . .
“Once a subatomic particle is observed, the program must then establish properties for that particle, effectively resulting in the collapse of the probability wave function. In 2008, the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, determined to a certainty of 80 orders of magnitude that objective reality does not exist by itself and only comes into being when consciously observed. Thus, the uncertainty of this result is 1 in 100000000000000000000000000000000000000000000000000000000000000000000000000000000, clearly a ridiculously small number. This effectively put a nail in the coffin of the last hope for objective realists.”
3. Abstract: Quantum Mechanics Implies That The Universe is a Computer:
Web site page is here. A quote from this page . . .
“Whatever the math does on paper, the quantum stuff does in the outside world.” That is, if the math can be manipulated to produce some absurd result, it will always turn out that the matter and energy around us actually behave in exactly that absurd manner when we look closely enough. It is as though our universe is being produced by the mathematical formulas. The backwards logic implied by quantum mechanics, where the mathematical formalism seems to be more “real” than the things and objects of nature, is unavoidable. In any conceptual conflict between what a mathematical equation can obtain for a result, and what a real object actually could do, the quantum mechanical experimental results always will conform to the mathematical prediction.”
4. Constraints on the Universe as a Numerical Simulation:
A scientific paper co-authored by Silas Beane, published here, which is explained more simply on this page here; that page is worth reading as it discusses all sorts of evidence that we are not in a real world. A quote from this page . . .
“It turns out, our “reality” is also pixelated, just at a very fine resolution. This study out of Bonn revealed that the energy level of cosmic rays “snaps to” the “resolution” of the universe in which we live. The very laws of electromagnetic radiation, in other words, are confined by the resolution of the three-dimensional simulation we call a “universe.”
5. Thinking as Simulation of Behaviour: an Associationist View of Cognitive Function:
This paper pointing out some interesting internal ‘simulation’ aspects of ourselves is here and a quote:
Perhaps the most exciting aspect of internal simulation is that it suggests a mechanism for generating the inner world that we associate with consciousness. There are many problems of consciousness, but surely one of them concerns the existence of an inner world of experience that does not immediately depend on external input. How does this inner world arise? The simulation hypothesis provides a simple and straightforward answer. Since simulation of behaviour and perception will be accompanied by internally generated sensory input resembling perceptions of the external world, it will inevitably be accompanied by the experience of an inner, non-physical, world.
General Hotchpotch of Newsy Simulation Argument Evidence & Discussion Pages:
- Physicists May Prove Our Universe is a Computer Simulation
- Simulation hypothesis – Wikipedia, the free encyclopedia
- Simulation Argument: RationalWiki
- Proof Of The Simulation Argument
- The dark side of the Simulation Argument
- We don’t live in a simulation
- Scientific Proof We Live In A Simulation
- Physicists May Have Evidence Universe Is A Computer Simulation
- Physicists say there may be a way to prove that we live in a computer simulation
- The simulation hypothesis and other things I don’t believe