For all the examples that have a Simulink® model, use the following procedure to run the example and view the model:
In the MATLAB® Command Window, enter the name of a Simulink model.
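For instance, to open the bouncing-ball model listed later in this section, you can enter:

```matlab
vrbounce
```

Any of the model names listed in the table below can be used in the same way.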
A Simulink window opens with the block diagram for the model. By default, a virtual world also opens in the Simulink 3D Animation™ Viewer or your HTML5-enabled web browser. If you close the virtual world window, double-click the VR Sink block to display it again.
If the viewer does not open, double-click the VR Sink block in the Simulink model. In the Simulink 3D Animation Viewer, from the Simulation menu, click Block Parameters. A Block Parameters dialog box opens. The Open viewer automatically check box is selected by default; with it selected, double-clicking the VR Sink block opens the virtual world window.
In the Simulink window, from the Simulation menu, click Run. (Alternatively, in the Simulink 3D Animation Viewer, from the Simulation menu, click Start.)
A simulation starts running, and the virtual world is animated using signal data from the simulation.
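If you prefer to drive the simulation from the Command Window instead of the menus, a minimal sketch (using vrbounce as a stand-in for whichever example model you opened) is:

```matlab
% Start, and later stop, an open Simulink model programmatically
set_param('vrbounce', 'SimulationCommand', 'start')
set_param('vrbounce', 'SimulationCommand', 'stop')
```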
The following table lists the Simulink examples provided with the Simulink 3D Animation product. Descriptions of the examples follow the table.
| Example | Simulink Coder™ Ready | VR Sink | VR Source | Joystick | Space Mouse |
This example shows how virtual collision sensors can be used to interactively control the simulation and to change the appearance of virtual world objects using Simulink® 3D Animation™. The example represents a simple unmanned aerial vehicle (UAV). The UAV competition scene is based on the IMAV Flight Competition held in 2013 in Toulouse, France (http://www.imav2013.org).
The sl3dex_rigidbodytree example demonstrates the functionality of the VR RigidBodyTree block. This example requires Robotics System Toolbox™ software.
The VR RigidBodyTree block inserts a visual representation of a Robotics System Toolbox RigidBodyTree object in the virtual world and displays it in the virtual reality viewer. During simulation, the rigid body tree is animated according to the configuration defined in the Config input.
In this example, the manipulator configuration is provided by the Robotics System Toolbox Inverse Kinematics block. You can use the sliders to change the robot end-effector position and orientation about one axis.
The vrbounce example represents a ball bouncing on a floor. The ball deforms as it hits the floor, keeping the volume of the ball constant. The deformation is achieved by modifying the scale field of the ball.
The vrcrane_joystick example illustrates how a Simulink model can interact with a virtual world. The portal crane dynamics are modeled in the Simulink interface and visualized in virtual reality. The model uses the Joystick Input block to control the setpoint. Three joystick axes control the setpoint position, and button 1 starts the crane. This example requires a standard joystick with at least three independent axes connected to the PC.
To minimize the number of signals transferred between the Simulink model and the virtual reality world, and to keep the model as simple and flexible as possible, only a minimum set of moving object properties is sent from the model to the VR Sink block. All other values necessary to describe the movement of the virtual reality objects are computed from this minimum set using VRMLScript in the associated virtual world 3D file.
For details on how the crane model hierarchy and scripting logic are implemented, see the associated commented virtual world 3D file.
The vrdemo_panel example shows the use of sensing objects that are available in the 3D World Editor Components library. These objects combine virtual world sensors with logic that changes their visual appearance based on user input. The sensor values can be read into Simulink by the VR Source block. The logic is implemented using VRML scripts.
The control panel contains a pushbutton, switch button, toggle switch, and a 2-D setpoint selection area. Outputs of these elements are read into a Simulink model and subsequently displayed using standard sinks, or used as inputs of blocks that control objects back in the virtual world.
The pushbutton, switch button, and toggle switch have state outputs of boolean type. Their values are displayed using a Scope block.
Two outputs of the 2-D setpoint area are used to achieve the following behavior. The value of the SetPoint_Changed eventOut is continuously updated while the pointer is over the sensor area. This value is triggered by the second output, isActive, which is true only while the pointer button is pressed. The triggered value, the coordinates of the active point on the sensor plane, is displayed using the XY Graph block and sent back to the virtual world in two ways: as the position of the green cone marker and as text that the VR Text Output block displays on the control panel.
The vrcrane_traj example is based on the vrcrane_joystick example, but instead of interactive control, it has a predefined load trajectory. The vrcrane_traj model illustrates a technique to create the visual impression of joining and splitting moving objects in the virtual world.
A crane magnet attaches the load box, moves it to a different location, then releases the box and returns to the initial position. This effect is achieved using an additional, geometrically identical shadow object that is placed as an independent object outside of the crane objects hierarchy. At any given time, only one of the Load and Shadow objects is displayed.
After the crane moves the load to a new position, at the time of the load release, a VRMLScript script assigns the new shadow object position according to the current Load position, and the Shadow object becomes visible. Because it is independent from the rest of the crane moving parts hierarchy, the Shadow object stays at its position as the crane moves away.
The vrlights example uses light sources. In the scene, you can move the Sun (modeled as a DirectionalLight) and the Lamp (modeled as a PointLight) objects from the Simulink model. This movement creates the illusion of changes between day and night, and of night terrain illumination. The associated virtual world 3D file defines several viewpoints that allow you to observe gradual changes in light from various perspectives.
The vrmaglev example shows the interaction between dynamic models
in the Simulink environment and virtual worlds. The Simulink model represents the HUMUSOFT® CE 152 Magnetic Levitation educational/presentation scale model. The plant
model is controlled by a PID controller with feed-forward to cope with the nonlinearity of
the magnetic levitation system. To more easily observe and control the ball, set the
virtual world viewer to the Camera 3 viewpoint.
You can set the ball position setpoint in two ways:
Using a Signal Generator block
Clicking in the virtual reality scene at a position that you want
To achieve a dragging effect, a PlaneSensor is attached to the ball geometry, with its output restricted to the <0,1> range in the vertical coordinate and processed by the VR Sensor Reader block, which provides the data connection.
For more details on how to read values from virtual worlds programmatically, see Add Sensors to Virtual Worlds.
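As a minimal sketch of that programmatic approach (the world file name and node name below are assumptions for illustration, not taken from the shipped example):

```matlab
% Read a node field from a virtual world through the MATLAB interface
w = vrworld('vrmaglev.wrl');   % associated world file (assumed name)
open(w);
ball = vrnode(w, 'Ball');      % named node in the scene (assumed name)
p = ball.translation;          % current position of the ball
close(w);
delete(w);
```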
In addition to the vrmaglev example, the vrmaglev_sldrt example works directly with the actual CE 152 scale model hardware in real time. This model requires the HUMUSOFT MF 624 data acquisition board, and Simulink Coder and Simulink Desktop Real-Time™ software. However, you can adapt this model for other targets and acquisition boards. A digital IIR filter, from the DSP System Toolbox™ library, filters the physical system output. You can bypass the physical system by using the built-in plant model. Running this model in real time demonstrates the capabilities of the Simulink product in control systems design and rapid prototyping.
After enabling the remote view in the VR Sink block dialog box, you can control the Simulink model even from another (remote) client computer. This control can be useful for distributing the computing power between a real-time Simulink model running on one machine and the rendering of a virtual reality world on another machine.
To work with this model, use as powerful a machine as possible or split the computing and rendering over two machines.
The vrmanipul example illustrates the use of Simulink 3D Animation software for virtual reality prototyping and for testing the viability of designs before the implementation phase. It also illustrates the use of a space mouse for manipulating objects in a virtual world. You must have a space mouse to run this example.
The virtual reality model represents a nuclear hot chamber manipulator. It is manipulated by a simple Simulink model containing the Space Mouse Input block. This model uses all six degrees of freedom of the space mouse for manipulating the mechanical arm, and uses mouse button 1 to close the grip of the manipulator jaws.
A space mouse is an input device with six degrees of freedom. It is useful for navigating and manipulating objects in a virtual world. A space mouse is also suitable as a general input device for Simulink models. You can use a space mouse for higher performance applications and user comfort. Space mouse input is supported through the Space Mouse Input block, which is included in the Simulink 3D Animation block library for the Simulink environment.
The Space Mouse Input block can operate in three modes to cover the most typical uses of such a device in a three-dimensional context.
The vrmanipul_global example illustrates the use of global coordinates in Simulink 3D Animation models. You can use global coordinates in a model in many ways, including:
Object tracking and manipulation
Simple collision detection
Simulation of haptic effects
The VR Source block supports using global coordinates for objects in a virtual world. For each Transform in the scene, the tree view in the VR Source block parameter dialog box displays an Extensions branch. In that branch, you can select the translation_abs and rotation_abs fields. Fields with the _abs suffix contain the object's global coordinates. The fields without the _abs suffix input their data into the Simulink model in the object's local coordinates (relative to their parent objects in the model hierarchy).
The virtual reality model represents a nuclear hot chamber manipulator. The
manipulator moves the load from one gray cylindrical platform to another. The trajectory
for the manipulator end-effector is predefined using the Signal Builder block. Each part of the manipulator arm is independently actuated using decomposed trajectory components, with the help of VR Expander blocks (see the VR Transformations subsystem).
The VR Source block in the virtual scene tree on the left captures global coordinates of all objects important for load manipulation:
Manipulator grip reference point (center of the clamp)
Destination reference point
Initial position of the load
The manipulator grip position results from the complex movement of the manipulator arm parts, which form a hierarchical structure. Generally, it is very difficult to compute global coordinates for such objects affected by hierarchical relations in the scene. However, Simulink 3D Animation provides an easy way to read the global coordinates of objects affected by hierarchical relations into a Simulink model.
With the global coordinates of all the important objects available, you can implement simple manipulator control logic.
The vrmemb1 example is similar to the vrmemb example, but in vrmemb1 the associated virtual world is driven from a Simulink model.
The vrmorph example illustrates how you can transfer matrix-type or
variable-size signal data between the Simulink interface and a virtual reality world. With this capability, you can perform
massive color changes or morphing. This model morphs a cube into an octahedron and then
changes it back to a cube.
The vr_octavia example illustrates the benefits of visualizing a complex dynamic model in a virtual reality environment. It also shows the Simulink 3D Animation 3-D offline animation recording functionality.
This example extends the vr_octavia example to show multiple-object visualization.
The precomputed simulation data represents a standard double-lane-change maneuver conducted in two-vehicle configurations. One configuration engages the Electronic Stability Program control unit. The other configuration switches that control unit off. The example sends two sets of vehicle dynamics data in parallel to the virtual reality scene, to drive two different vehicles.
Models of the vehicles use the
EXTERNPROTO mechanism. In the main
virtual world associated with the VR Sink block, you can create several identical vehicles
as instances of a common 3-D object. This approach greatly simplifies virtual world
authoring. For instance, it is very easy to create a third vehicle to simultaneously
visualize another simulation scenario. In the associated virtual world file, the code after the definition of the PROTOs illustrates an approach for defining reusable objects.
In addition to the vehicle properties controlled in the vr_octavia example, the vehicle prototypes also allow you to define vehicle color and scale. These properties distinguish individual car instances (color) and avoid unpleasant visual interaction of two nearly aligned 3-D objects (scale). Scaling one of the cars by a small amount nests one car inside the other, so that their faces do not clip randomly, based on the current simulation data in each simulation step.
To visualize vehicles side-by-side, add an offset to the position of one vehicle.
The vr_octavia_graphs example extends the vr_octavia example by showing how to combine a virtual reality canvas in one figure with other graphical user interface objects. In this case, the figure displays three graphs that update at each major simulation time step.
The vr_octavia_mirror example extends the vr_octavia example by showing the capability of the VR Sink block to process a video stream on input. In the virtual world, a
texture map is defined at the point of the vehicle left rear mirror. The example places a
2-D image from a viewpoint at the same position (looking backward). That image is looped
back into the same virtual world and projected on the rear mirror glass, creating the
impression of a live reflection. Texture images can have different formats (corresponding
to the available
SFImage definitions according to the VRML97 standard).
This example uses an RGB image that has the same format as the output from the VR to Video
block. In the virtual world 3D file associated with the scene, you can define only a
trivial texture (in this case, a 4x4 pixel checkerboard) that gets resized during
simulation, according to the current size of the signal on the input. See the Plane
Manipulation Using Space Mouse MATLAB Object example.
The vr_octavia_video example illustrates how to use video output
from the VR To Video block. This model performs simple
operations on the video output. It requires the Computer
Vision Toolbox™ product.
The vrpend example illustrates the various ways a dynamic model in
the Simulink interface can interact with a virtual reality scene. It is the model of a
two-dimensional inverted pendulum controlled by a PID controller. What distinguishes this
model from common inverted pendulum models are the methods for setting the set point. You
visualize and interact with a virtual world by using a Trajectory Graph and VR Sink
blocks. The Trajectory Graph block allows you to track the history of the pendulum
position and change the set point in three ways:
Mouse — Click and drag a mouse pointer in the Trajectory Graph two-dimensional window
Input Signal — External Trajectory Graph input in this model (driven by a random number generator)
VR Sensor — Activates the input from a VRML TouchSensor
When the pointing device in the virtual world viewer moves over an active TouchSensor area, the cursor shape changes. The triggering logic in this model is set to apply the new set point value with a left mouse button click.
Notice the pseudo-orthographic view defined in the associated virtual world 3D file. You achieve this effect by creating a viewpoint that is located far from the object of interest, with a very narrow view angle defined by the viewpoint's fieldOfView field.
An orthographic view is useful for eliminating the panoramic distortion that occurs when
you are using a wide-angle lens. The disadvantage of this technique is that locating the
viewpoint at a distance makes the standard viewer navigation tricky or difficult in some
navigation modes, such as the Examine mode. If you want to navigate around the virtual
pendulum bench, you should use some other viewpoint.
The vrplanets example shows the dynamic representation of the first four planets of the solar system, the Moon orbiting around the Earth, and the Sun itself. The model uses the real properties of the celestial bodies. Only the relative planet sizes and the distance between the Earth and the Moon are adjusted, to provide an interesting view.
Several viewpoints are defined in the virtual world, both static and attached to an
observer on Earth. You can see that the planet bodies are not represented as perfect
spheres. Using the
Sphere graphic primitive, which is rendered this
way, simplified the model. If you want to make the planets more realistic, you could use
the more complex
IndexedFaceSet node type.
Mutual gravity accelerations of the bodies are computed using Simulink matrix-type data support.
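A simplified sketch of such a pairwise gravity computation in MATLAB (not the shipped model's code; the function and variable names, and the use of a plain function instead of a matrix-signal block, are assumptions):

```matlab
function a = gravityAccel(pos, m)
% pos - N-by-3 matrix of body positions [m]
% m   - N-by-1 vector of body masses [kg]
% a   - N-by-3 matrix of accelerations from mutual gravity [m/s^2]
G = 6.674e-11;                       % gravitational constant
N = size(pos, 1);
a = zeros(N, 3);
for i = 1:N
    for j = 1:N
        if i ~= j
            r = pos(j,:) - pos(i,:);              % vector from body i to j
            a(i,:) = a(i,:) + G*m(j)*r/norm(r)^3; % inverse-square attraction
        end
    end
end
end
```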
The vrtkoff example represents a simplified aircraft taking off
from a runway. Several viewpoints are defined in this model, both static and attached to
the plane, allowing you to see the takeoff from various perspectives.
The model shows the technique of combining several objects imported or obtained from
different sources (CAD packages, general 3-D modelers, and so on) into a virtual reality
scene. Usually it is necessary for you to wrap such imported objects with an additional
Transform node. This wrapper allows you to set appropriately the
scaling, position, and orientation of the objects to fit in the scene. In this example,
the aircraft model from the Ligos® V-Realm Builder Object Library is incorporated into the scene. The file vrtkoff2.wrl uses the same scene with a different type of aircraft.
The vrtkoff_trace example is a variant of the vrtkoff example that illustrates how to trace the trajectory of a moving object (the plane) in a
scene. It uses a VR Tracer block. Using a predefined sample time, this block allows you to
place markers at the current position of an object. When the simulation stops, the markers
indicate the trajectory path of the object. This example uses an octahedron as a marker.
The vrtkoff_hud example illustrates how to display signal values as text in the virtual world and as a simple head-up display (HUD). It is a variant of the vrtkoff example.
The example sends the text to a virtual world using the VR Text
Output block. This block formats the input vector using the format string
defined in its mask (see
sprintf for more information) and sends the
resulting string to the
'string' field of the associated
Text node in the scene.
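In sprintf terms, the block's formatting step behaves like the following (the format string here is an illustrative assumption, not the one used in the example):

```matlab
% Format a numeric signal value the way the VR Text Output block does
txt = sprintf('Altitude: %6.1f m', 1234.56);
% txt is now 'Altitude: 1234.6 m'
```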
The example achieves HUD behavior (maintaining a constant relative position between the user and the Text node) by defining a ProximitySensor. This sensor senses the user position and orientation as the user navigates through the scene, and routes this information to the translation and rotation of the HUD object (in this case, a Transform that contains the Text node).
The vrcollisions example shows a simple way to implement collision detection.
In the virtual world, an X3D LinePickSensor is defined. This sensor detects approximate collisions of several rays (modeled as an IndexedLineSet) with arbitrary geometries in the scene. For geometric primitives, exact collisions are detected. One of the LinePickSensor output fields is the isActive field, which becomes TRUE as soon as a collision between any of the rays and surrounding scene objects is detected.
The robot is inside a room with several obstacles. During the simulation, the robot moves forward as long as its sensor does not bounce into a wall or an obstacle. Use the Left and Right buttons to turn the robot so that there is a free path ahead, and the robot starts moving again.
The model defines both VR Sink and VR Source blocks,
associated with the same virtual scene. The VR Source reads the sensor
isActive signal and the current position of the robot. The VR
Sink block sets the robot position, rotation, and color.
In the virtual world, there are two viewpoints defined: one static and one attached to the robot.
The vrcollisions_lidar example shows how a LinePickSensor can be used to model lidar sensor behavior in Simulink 3D Animation.
In a simple virtual world, a wheeled robot with a lidar sensor mounted on its top is defined. This lidar sensor is implemented using a LinePickSensor, which detects collisions of several rays (modeled as an IndexedLineSet) with surrounding scene objects. The sensor output fields are used in this model for visualization purposes only, but together with robot pose information they could be used for Simultaneous Localization and Mapping (SLAM) and other similar purposes.
The sensor sensing lines are visible, shown as transparent green lines. There are 51 sensing rays, evenly spaced in the horizontal plane between -90 and 90 degrees. The lidar range is 10 meters.
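The geometry of this ray fan can be reproduced in a few lines of MATLAB (a sketch for illustration; the shipped example defines these lines in the virtual world file, and the axis orientation here is an assumption):

```matlab
range  = 10;                        % lidar range [m]
angles = linspace(-90, 90, 51);     % 51 rays across the horizontal plane [deg]
% End points of the sensing lines, relative to the sensor origin
endpoints = range * [cosd(angles)' sind(angles)' zeros(51, 1)];
```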
To visualize the lidar sensor output, a visualization proxy LineSet is defined, with lines identical to those defined as the LinePickSensor sensing geometry. The visualization lines are blue.
The LinePickSensor output is used to visualize the points of collision. The pickedPoint output contains the coordinates of points that collided with surrounding objects. This output has variable size, depending on how many sensor rays currently collide. The pickedRange output size is fixed, equal to the number of sensing rays. This output returns the distance from the lidar sensor origin to the collision point for each sensing line. For rays that do not collide, this output returns -1. The pickedRange output is used to determine the indices of the lines for which collision points are returned in the pickedPoint sensor output. In effect, the blue lines are shortened so that only the line segment between the ray fan origin and the point of collision is displayed for each line.
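The index matching between the two outputs can be sketched as follows (the numeric values are made up for illustration):

```matlab
pickedRange = [-1; 4.2; -1; 7.5];   % -1 where a ray does not collide
pickedPoint = [4.2 0 0; 0 7.5 0];   % one row per colliding ray, in ray order
hitIdx = find(pickedRange ~= -1);   % rays that hit an obstacle
% pickedPoint(k,:) is the collision point of ray hitIdx(k)
```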
Robot trajectory is modeled in a trivial way using the Signal Builder and the Ramp blocks. In the Signal Builder, a simple 1x1 meter square trajectory is defined for the first 40 seconds of simulation. After returning to its original position, the robot only rotates indefinitely.
In the model, there are both VR Sink and VR Source blocks defined, associated with the same virtual world. The VR Source is used to read the sensor signals. The VR Sink is used to set the Robot position / rotation and the coordinates of endpoints of the sensor visual proxy lines.
In the virtual world, there are several viewpoints defined, both static and attached to the robot, allowing you to observe the lidar visualization from different perspectives.
The vrmaze example shows how you can use collision detection to simulate a differential wheeled robot that solves a maze challenge. The robot control algorithm uses information from virtual ultrasonic sensors that sense the distance to surrounding obstacles.
A simple differential wheeled robot is equipped with two virtual ultrasonic sensors. One of the sensors looks ahead, and the other is directed to the left of the robot. The sensors are simplified; their active range is represented by green lines. The sensors are implemented as X3D LinePickSensor nodes. These sensors detect approximate collisions of rays (modeled as an IndexedLineSet) with arbitrary geometries in the scene. For geometric primitives, exact collisions are detected. One of the LinePickSensor output fields is the isActive field, which becomes TRUE as soon as a collision between its ray and surrounding scene objects is detected. When activated, the sensor lines change their color from green to red, using a script written directly in the virtual world 3D file.
In the model, there are both VR Sink and VR Source blocks defined, associated with the same virtual scene. The VR Source block reads the sensor isActive signals. The VR Sink block sets the robot position and rotation in the virtual world.
The robot control algorithm is implemented using a Stateflow® chart.
The following table lists the MATLAB interface examples provided with the software. Descriptions of the examples follow the table. MATLAB interface examples display virtual worlds in your default viewer. If your default viewer is the Simulink 3D Animation Viewer, some buttons are unavailable; in particular, the buttons for simulation and recording are unavailable.
| Text | Recording | vrml() Function Use | Space Mouse |
This example illustrates the use of the Simulink 3D Animation product with the MATLAB interface. In a step-by-step tutorial, it shows commands for navigating a virtual car along a path through the mountains.
In the MATLAB Command Window, type
A tutorial script starts running. Follow the instructions in the MATLAB Command Window.
This example illustrates the use of the Simulink 3D Animation product with the MATLAB interface for manipulating complex objects.
In this example, matrix-type data is transferred between the MATLAB software and a virtual reality world. Using this feature, you can achieve massive color changes or morphing. This is useful for representing various physical processes. Precalculated data of time-based temperature distribution in an L-shaped metal block is used. The data is then sent to the virtual world. This forms an animation with relatively large changes.
This is a step-by-step example that shows the following features:
Reshaping the object
Applying the color palette to represent distributed parameters across an object shape
Working with VRML or X3D text objects
Animating a scene using the MATLAB interface
Synchronization of multiple scene properties
At the end of this example, you can preserve the virtual world object in the MATLAB workspace, then save the resulting scene to a corresponding virtual world 3D file or carry out other subsequent operations on it.
This example illustrates the use of the Simulink 3D Animation MATLAB interface to create 2-D offline animation files.
You can control the offline animation recording mechanism by setting the relevant vrfigure object properties. Typically, you use the Simulink 3D Animation Viewer to record animations; however, direct control of the recording is also possible.
This example uses the heat distribution data from the preceding heat distribution example to create an animation file. You can later distribute this animation file to be independently viewed by others. For this kind of visualization, where the static geometry represented by an IndexedFaceSet node is colored based on the simulation of some physical phenomenon, it is suitable to create 2-D .avi animation files. The software uses a MATLAB VideoWriter object to record the 2-D animation exactly as it appears in the viewer figure.
There are several methods you can use to record animations. In this example, we use scheduled recording. When scheduled recording is active, a frame is recorded into the animation file each time the virtual world Time property is set. Recording is completed when you set the scene time to the end of, or outside, the predefined recording interval.
When using the Simulink 3D Animation MATLAB interface, you set the scene time as desired, typically at times that are equidistant from the point of view of the simulated phenomenon. This is the most important difference from recording animations for virtual worlds associated with Simulink models, where the scene time corresponds directly to the Simulink time.
The scene time can represent any independent quantity along which you want to animate the computed solution.
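A scheduled-recording session through the MATLAB interface can be sketched as follows (the world file name and recording interval are assumptions; the properties used are RecordMode and RecordInterval on the vrworld object and Record2D on the vrfigure object):

```matlab
w = vrworld('myworld.wrl');          % world file name is an assumption
open(w);
f = vrfigure(w);
set(w, 'RecordMode', 'scheduled');   % record only within RecordInterval
set(w, 'RecordInterval', [0 10]);    % scene-time interval to record
set(f, 'Record2D', 'on');            % produce a 2-D .avi file
for t = 0:0.1:10
    % ... update world node fields for scene time t here ...
    set(w, 'Time', t);               % each Time update records one frame
end
set(w, 'Time', 11);                  % stepping outside the interval ends recording
close(f);
close(w);
delete(w);
```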
This is a step-by-step example that shows the following features:
Recording 2-D offline animations using the MATLAB interface
Applying the color palette to visualize distributed parameters across an object shape
Animating a scene
Playing the created 2-D animation file using the system AVI player
At the end of this example, the resulting animation file remains in the working folder for later use.
The vrmemb example shows how to use a 3-D graphic object generated from the MATLAB environment with the Simulink 3D Animation product. The membrane was generated in MATLAB and saved in the VRML format using the standard vrml function. You can save all Handle Graphics® objects this way and use them with the Simulink 3D Animation software as components of associated virtual worlds.
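That workflow can be sketched as follows (the choice of surface and the output file name are assumptions for illustration):

```matlab
% Plot the MATLAB L-shaped membrane and save the figure in VRML format
fig = figure;
surf(membrane(1, 25));        % built-in MATLAB membrane surface
vrml(fig, 'membrane.wrl');    % export Handle Graphics objects to a VRML file
```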
After starting the example, you see a control panel with two sliders and three check boxes. Use the sliders to rotate and zoom the membrane while you use the check boxes to determine the axis to rotate around.
In the virtual scene, notice the text object. It is a child of the
Billboard node. You can configure this node so that its local
z-axis turns to point to the viewer at all times. This can be
useful for modeling virtual control panels and head-up displays (HUDs).
This example illustrates converting available Digital Elevation Models into the VRML format, for use in virtual reality scenes.
As a source of terrain data, the South San Francisco DEM model (included in the Mapping Toolbox™ software) has been used. A simple Boeing® 747® model is included in the scene to show the technique of creating virtual worlds from several sources on-the-fly.
This example requires the Mapping Toolbox software from MathWorks®.
This example illustrates how to use a space mouse using the MATLAB interface. After you start this example, a virtual world with an aircraft is displayed in the Simulink 3D Animation Viewer. You can navigate the plane in the scene using a space mouse input device. Press button 1 to place a marker at the current plane position.
This example requires a space mouse or compatible device.