Full Motion Interactive Simulation

By Robert Beken

Presented at TILE (Technology in Leisure and Entertainment) 1993, Maastricht Exhibition and Congress Centre, The Netherlands


Full motion interactive simulation rides are now the technology leader in entertainment projects worldwide and represent the largest future revenue source. A successful full-motion interactive ride must incorporate vection stimulus techniques discovered during the last fifty years of flight simulator research. These techniques are discussed and the pitfalls of not incorporating these techniques are also covered. These rides may be divided into three groups and each group is discussed and compared.


A year ago interactive entertainment systems were barely more than gleams in the eyes of entrepreneurs the world over. Yes, there were some here and there but most were delicate and primitive laboratory demonstrators rushed to market too soon.

Just one year later we are faced with possibly a dozen systems that have all been designed for continuous use by the general public and which offer as much as a hundred-fold improvement in visual representation.

Before I discuss the capabilities of the new genre of interactive rides, it is important that we all have a firm grasp of the basics — what do we really need to make a ride feel interactive.

The most important visual research results have not been shaken from their prominence:

The imagery to the front of the rider is not important. It is the imagery at the sides — out of the corner of the eye — that has the greatest effect on “feeling” motion.

The imagery of objects perceived to be close by and to the front is not as important as objects perceived to be far away.

The more complex the scene and the more obvious the objects, the faster motion will be “felt.”

Continuous visual cues are neither as important nor as effective as brief visual cues.

The optimum rate of oscillation of visual cues is around 0.25 hertz.

I am certain that you have gleaned much of the same from your personal experiences in movie theaters and vehicles.

Visual cues are quite important — they create at least half of the vection — or feeling of motion — that you experience. Yes, half — and your actual movement within a motion system accounts for less than half — with other cues making up the balance.

I cannot stress this too much. The most critical requirement for imparting the feeling of motion is an appropriate set of visual cues.

I will return to this topic in a few minutes with specific examples …

Let’s now discuss how we “feel.” There are two systems that we use to “feel:”

  • Vestibular System
  • Haptic System

Vestibular System

The vestibular system is that which is contained within the inner ear. It is this system that many of us think contributes the most to our feeling of seasickness. But this is not true.

Within the inner ear, we have two sensor systems. The first, the Semicircular Canals, senses rotation of the head. This system helps your brain understand what is required to keep your eyes positioned as needed while the head turns.

The second, consisting of the saccule and the utricle, measures linear acceleration and the angle at which the head is held relative to gravity.

Thus these inner ear sensors provide substantial information about our environment. Not only the angle of rotation of the body but the rate of rotation.

These sensor systems have major flaws — they do not operate at angles greater than 45 degrees and they cannot detect or measure subtle motion.

What this all means is that the motion sensed by your internal but head-mounted sensors does not really give you as much information about the outside world as you think. It also means that in some ways these head-mounted sensors can be a detriment to your awareness of the world around you.

It is the sloshing of the fluids to less frequented corners within these sensor systems that lets you enjoy a rollercoaster for several minutes after you have left the ride.

There is another set of sensors that operate a bit more rationally. These are the sensors of the Haptic system.

Haptic System

The Haptic system consists of all of the sensors throughout the body which provide touch, pressure, and vibration information.

Numerous sensors are part of the Haptic system. I will discuss them briefly — but enough so that you can appreciate the complexity of the systems.

Pacinian Corpuscles These receptors sense vibration and other high-frequency stimuli. They can respond to movements as small as ten microns. They adapt very rapidly, responding mainly to changes in stimulation rather than to steady pressure.

Merkel’s Disks These receptors sense skin pressure. They adapt to stimuli within about 30 seconds. You can easily put some into operation by placing your wristwatch on the other wrist and timing your adaptation to the new stimuli. Or simply look down at your watch right now and think of a good reason why you see it but can’t feel it.

To put a price tag on the experience — if you had a wallet in your back pocket this morning, can you state with utter certainty that it is still there? Not unless you move just a millimeter — and that is true even when you are sitting on it!

Ruffini’s End Organs These receptors also sense pressure. But because they do not fully adapt to stimuli they provide the brain with information about the position of limbs and the torso.

Now let’s discuss how all of these systems can be deceived so that you can enjoy — or at least experience — a ride.

First, let’s replace the term adapt or adaptation discussed above with the word forget …

Yes, the secret of motion systems is the fact that the human body forgets where it is — and does it quickly. And that because the human body forgets so much so quickly we use the onset of the stimulation to create the most effect and need not continue the sensation beyond a brief experience.

We can create a motion system that would provide positive motion for a few seconds and then taper off and then slowly — very slowly — below the threshold of the body’s sensors — return to its original position ready to impart new stimuli.

This all works so long as the position of the cockpit remains within certain limits — no radical movements and no inverted operation. And this is in fact how most commercial aircraft flight simulators work. This is also how certain very expensive rides operate.

Standard simulators — where the angular displacement is less than 30 degrees and the linear displacement (measured at the center of the cockpit) is less than 3 feet and the rate at which movement occurs is low — can use such motion systems.
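The onset-then-return strategy just described can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's algorithm: the gain, the return rate, and the simple kinematics are all assumptions chosen to show the principle.

```python
# Hypothetical sketch of an onset-plus-washout motion strategy:
# reproduce the onset of an acceleration cue, then drift the platform
# back to neutral at a rate below the rider's sensing threshold.
# All constants here are illustrative assumptions.

def washout_step(platform_pos, commanded_accel, dt,
                 onset_gain=1.0, return_rate=0.02):
    """One update: apply the onset cue, then wash out toward neutral."""
    # Reproduce the onset of the cue at (nearly) full strength.
    new_pos = platform_pos + onset_gain * commanded_accel * dt * dt
    # With no cue commanded, creep back toward the neutral position
    # slowly enough that the rider's sensors never notice.
    if abs(commanded_accel) < 1e-6:
        if new_pos > 0.0:
            new_pos = max(0.0, new_pos - return_rate * dt)
        else:
            new_pos = min(0.0, new_pos + return_rate * dt)
    return new_pos
```

Run it with a burst of commanded acceleration and the platform moves; command zero and it settles back toward neutral, ready to impart the next cue.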

When the experience must offer full inverted attitudes and more — displacements of 180 degrees and rotational acceleration rates of 300 degrees per second squared and greater — then other methods must be used. To create these experiences engineers have discarded motion systems per se and have implemented hardware that ignores the Vestibular system and focuses on visual and Haptic system cues. Haptic system cues are effected with “G” seats.

Let me repeat this. Engineers and psychologists have found that to simulate high-performance aircraft they need not stimulate the vestibular system, they need not actually move the passenger around. They have found that the visual scene combined with Haptic cues are enough. They can use these special “G” seats that affect the Haptic sensors.

“G” seats give you that feeling of pressure against the skin that would be appropriate for a given external motion.

Often these seats will include pressure on harnesses, seat backs, seat bottoms, and sides. The seat then applies the appropriate Haptic cues for the displayed visual motion cues.

In any of these systems — motion and non-motion — it is important that the person experiencing these cues can fit these cues within his own experiences. This means that his body has learned to expect certain limits in position and in rates of acceleration. It also means his body will not take kindly to abrupt departures from those known limits.

In addition, the body has learned that there are certain relationships between the visual cues, Vestibular cues, and Haptic cues. These relationships include not just positions and forces — they also include temporal relationships. These cues must occur within appropriate time windows of each other.

If all of these wonderful new effects are not within the given person’s experience then their body might just decide that the only possible reason for the disparity between what is seen and felt is some form of chemical change within the body — caused by poisoning. This normally means the body will make every effort to evacuate any internal containers that could hold such poisons … Any internal containers …

The body can be trained to reject these conclusions and accept new relationships. Sometimes this training can take days, even weeks. Some people are never successful at adapting to these stimuli.

We are all lucky in that most people will not become ill from such experiences in less than two or three minutes.

I will never forget the first ride my son received in a shopping cart. It lasted from the front of the store and down two aisles to an eruption of monumental proportions — maybe one minute ten seconds into the ride …

Thus, it is important that the cues be appropriate to the person’s experience and also that they be in phase. Because the body operates with an internal clock rate of about ten hertz we need only be within that tenth of a second window to keep things somewhat in phase and minimize en masse eruptions.
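The ten-hertz figure suggests a simple coordination rule: a motion cue and its matching visual cue should land within about a tenth of a second of each other. A minimal sketch of such a check, with hypothetical names:

```python
# The ~0.1 s window follows from the body's roughly ten-hertz internal
# clock described in the text; the function name is an assumption.

PHASE_WINDOW_S = 0.1  # seconds

def cues_in_phase(visual_time_s, motion_time_s, window=PHASE_WINDOW_S):
    """True if the two cues arrive close enough to feel simultaneous."""
    return abs(visual_time_s - motion_time_s) <= window
```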

So in review, we now know that the visual system is the most important effector of motion to our brain. We know that the wider the field of view the better — and the quicker these feelings of motion will be sensed by the brain. We also know that sharp images with lots of dots, lines, and other abrupt changes in the image cause the greatest feelings of motion. We also know that given a choice it is better to offer wide expanses of imagery combined with a “G” seat than narrow fields of view and unusual motion.

Lastly, we know that dire consequences may result from poorly designed interactive rides — with near-certain catastrophe if they are continued beyond a two- or three-minute experience.

With all of that as background let’s now look at some of the options available to a park manager today:

Today’s Interactive Ride Systems

There are three general categories of ride systems that I will discuss:

  • Non-motion 120 + Degree Field of View
  • Motion 40 + Degree Field of View
  • Motion 120 + Degree Field of View

Non-Motion 120 + Degree Field of View These systems include those that offer virtual reality helmets and large format projection systems.

For creating a startlingly real visual experience, Virtual Reality must be given the nod as offering the best possible future. Present problems with such systems are really limited to visual resolution and head-motion sensing delays. These problems will be solved very quickly. These systems do not yet offer wide fields of view and depend upon head movement to increase vection.

Some vendors discuss the effects of 3-D and how Virtual Reality systems can offer such full 3-D effects. Most entertainment experiences will occur with visualized objects more than 15 meters from the viewer. Beyond about 15 meters the benefits of 3-D drop off quickly. An aerial dogfight experience will gain very little from 3-D except during mid-air collisions.

The alternative to helmet-mounted display technology is a widescreen or multiscreen display. The ride leader in this technology is certainly the Hughes Mirage. The incredible imagery combined with collimated optics and wide field of view makes this a tremendous experience — if you have the money.

The Hughes Mirage does not provide motion. It is a fixed base system. It is also the only interactive ride manufactured by a major flight simulator concern. This again brings us to my first comments about the importance of visual cues. Maybe these people at Hughes know something …

Motion 40 + Degree Field of View There are two such systems on the market today that offer motion and a narrow visual display. In both cases, the visual display is limited to the front view only. Research study after research study suggests that visual cues entering the eye from the periphery are critical to vection. Certainly, the cost of the image generation electronics may have been a major contributor to the decision to limit the field of view but such systems are severely limited.

It is also possible to overpower errant physical cues with additional visual cues. This means that if because of mechanical constraints in a fully interactive system some license is taken in effecting certain maneuvers the deleterious effects on the Haptic and Vestibular sensor systems can be ignored if sufficient visual cueing is available. With wide visual panoramas, the visual system takes priority and saves the day.

Conversely, it means that in narrow field of view systems like these one must be very careful to keep the ride length well under three minutes, or install automatic washing equipment.

In addition, there must be significant thematic constraints placed upon such rides. For car race experiences, skiing, bobsled, and the like, such rides can be quite a sensation. When they are used for aerial combat — where views of wide expanses of airspace are critical — they may be less effective.

This would certainly be true for someone having had multiple rides where the “tunnel vision” of the visual display begins to be noticed over the newness of the motion experience. There are “crutches” that can be used — Radar screen displays and the use of missiles rather than close-in guns.

Motion 120 + Degree Field of View Certainly the most incredible ride — for visual experience — is the Mirage from Hughes. If it were possible to take that ride and move it — not just like a Boeing 747 flight simulator but like a jet fighter in high “G” maneuvers including full inverted flight — then we would have arrived.

There are three such systems available that provide that experience. One is a Virtual Reality system and the other two are widescreen projection systems.

Because of the limited field of view of the present Virtual Reality helmets, it is not yet possible to impart the same degree of vection as can be experienced with widescreen projection systems. Still, these rides are less expensive than widescreen projection systems and might fit a niche.

The low traffic volume of a Virtual Reality system combined with possible health considerations due to the common-use helmet contribute to the design’s special role in entertainment.

The remaining two systems depend upon widescreen display technology to impart the maximum possible vection. In addition, one of the systems includes a “G” seat as well as pitch and roll.

By including a “G” seat the engineers have allowed the system to save full motion for coarse maneuvering while all fine movements can be effected with the seat alone. In addition, the “G” seat allows buffeting effects and shell impacts which would be difficult if the entire cab was moved.

Both of these systems offer pitch ranges of 50 degrees or more and continuous rolls in either direction.

As with other full-motion systems discussed above these systems are unable to create the exact feel of certain maneuvers. In these cases the wide screen — 120 degrees or more — lets the rider’s brain accept as valid what he sees rather than what he momentarily feels. In many cases, the rider is unaware that he did not actually perform the maneuver.

One good example is a loop. The machines can pitch up rather dramatically but then must execute a roll to attain inverted operation. With the widescreen picture telling the brain a loop is being executed the brain is less aware of the roll. Aerobatic pilots have commented that they thought they had rolled and yawed.

The end result though is an experience that can be tailored to nearly any venue. From auto racing to piston-powered aerial dogfights with guns blazing, the widescreen imagery and motion systems deliver an experience well worth more than the ticket price.

To be certain that the effects are coordinated, the motion dynamics of the simulated vehicle are calculated every 16 milliseconds. The position and attitude of the simulated vehicle are calculated and then the G-forces experienced by the rider are calculated.
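Such a 16-millisecond cycle can be sketched as a fixed-timestep update, assuming a point-mass vehicle model and acceleration acting along the rider's vertical axis (both simplifications of whatever the real systems compute):

```python
# Hypothetical sketch of one 16 ms simulation cycle: integrate the
# vehicle state, then derive the G-load the rider should feel.
# The point-mass model and the load formula are assumptions.

DT = 0.016   # the 16-millisecond cycle described in the text
G = 9.81     # m/s^2

def update_vehicle(position, velocity, accel, dt=DT):
    """One fixed-timestep update: integrate state, then compute rider load."""
    velocity = velocity + accel * dt
    position = position + velocity * dt
    # Crude load factor, assuming accel acts along the rider's vertical
    # axis: 1 G of support against gravity plus the commanded acceleration.
    g_load = 1.0 + accel / G
    return position, velocity, g_load
```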

The simulator can be commanded to assume the attitude of the simulated vehicle or to create the appropriate vection cues coordinated to the visual scene.

The maximum G forces to be experienced by the rider can be set into the system to maximize the rider’s safety. This is accomplished by adjusting the simulation model control inputs (the effects on the computer model from the stick, rudder, and throttle inputs). Thus the system can be dynamically tailored to a wide range of riders.
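One way such a limit might work (an assumption about the mechanism, since the text says only that the model's control inputs are adjusted) is to attenuate a control input whenever the load it would produce exceeds the configured maximum:

```python
def limit_control_input(stick, predicted_g, max_g):
    """Scale a control input so the predicted rider load stays within max_g.

    Hypothetical sketch: assumes the predicted load scales linearly with
    the input, which a real simulation model need not do.
    """
    if predicted_g <= max_g or predicted_g <= 0.0:
        return stick
    return stick * (max_g / predicted_g)
```

With the maximum set low for a timid rider, full stick deflection simply produces a gentler maneuver.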

The computed position commands are available to the servo systems at the completion of each 16-millisecond cycle. The servo controllers operate in a one-millisecond control loop, which provides very accurate position control, and various parameters can be supplied to the servo systems to tune that one-millisecond control behavior.
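As a sketch of what a one-millisecond proportional position loop looks like (the gain and the simple velocity-command actuator model are illustrative assumptions, not the actual controller design):

```python
# Minimal proportional position servo running at the 1 ms rate
# described in the text. Gain and plant model are assumptions.

SERVO_DT = 0.001  # one-millisecond control loop

def servo_step(position, target, gain=50.0, dt=SERVO_DT):
    """Drive the actuator toward target with a proportional velocity command."""
    velocity_cmd = gain * (target - position)
    return position + velocity_cmd * dt
```

Iterated at one kilohertz, the position converges smoothly onto each new 16-millisecond target from the simulation.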

All three of these systems can act as stand-alone simulators or as a node in a network. They are equipped with full Ethernet networking hardware and software to allow for master control of all simulators in the network and to allow for cooperative activities among simulators. All of the present systems accept up to six simulators in a network.

The network also allows for the use of a database computer node so that performance data can be maintained on each simulator and each rider. The system can maintain data on a rider for several years — even at high traffic volumes.

To my knowledge, none of the systems I have discussed — except the last two — offer a sophisticated pseudo pilot aggressor. This means that most of the systems are dependent upon multiple linked cockpits as the source for “enemy” or competitors. This can make the up-front cost of such rides quite high.

Pseudo Pilots

In the case of the last two rides arrangements have been made to use Greystone’s Advanced Maneuvering Logic. This software is used in the most sophisticated dogfight simulators in the world. It was even used by the Defense Advanced Research Projects Agency in a Northrop simulator to fly the airplane and evade two attacking missiles while the pilot was incapacitated.

This software has grown over the last twenty years, and one of its latest versions is called AML-EXPERT IVAN …

This software is one of the most incredible packages available and its application to entertainment assures that anyone — even highly skilled jet fighter pilots — will be given the adrenalin-loaded ride of their life.


In summary, we have entered a new age. Not only do we now have full motion rides with widescreen out-of-window views but we have sophisticated computer intelligence to match the skill level of not just most but virtually all possible riders.