Given by Zeynep Ozdemir, Rob Baker, Meryem Ispirli, Geoffrey C. Fox at CPS616 -- Information Track of CPS in the Spring Semester 98. Foils prepared March 4 1998
Outside Index
Summary of Material
See VRML Resources including many Examples, Specifications and Tutorials |
See VRML Consortium including VRML97 Specification and News |
What is VRML2.0 and VRML97 |
Status of VRML Browsers |
Features of VRML97 with changes from VRML1.0 -- Deletions and Additions |
VRML97 File Format |
Nodes, Fields, Events, Shapes, Routes, Sensors |
Detailed Discussion of Specific Nodes without going into Scripting issues |
VRML(97 or 2.0) |
The Virtual Reality Modeling Language |
Instructor : Dr. Geoffrey Fox |
Material by Rob Baker, Meryem Ispirli, Zeynep Ozdemir, Wojtek Furmanski |
VRML2.0/ VRML 97 Specification : |
VRML2.0 Resources (NPAC) : |
The Virtual Reality Modeling Language(VRML) is a file format for describing 3D shapes and interactive worlds and objects. It may be used in conjunction with the World Wide Web. |
The most exciting feature of VRML is that it enables you to create dynamic worlds and sensory-rich virtual environments on the Internet. |
VRML1.0 Specification was based on the Open Inventor file format from Silicon Graphics and developed by a team. |
VRML2.0 (Moving Worlds) Specification has significantly more interactive capabilities. It was designed by the Silicon Graphics VRML team with contributions from Sony Research and Mitra. |
VRML 2.0 was reviewed by the VRML moderated email discussion group (www-vrml@wired.com), and later adopted and endorsed by most 3D graphics and web browser vendors. |
VRML97 is the name of the ISO VRML standard. It is almost identical to VRML 2.0, but with numerous improvements to the specification document and a few minor functional differences. |
VRML97 was officially approved Jan 26, 1998 by ISO (International Organization for Standardization). |
VRML was conceived in the spring of 1994 at the first annual World Wide Web Conference in Geneva, Switzerland. |
Tim Berners-Lee and Dave Raggett organized a Birds-of-a-Feather (BOF) session to discuss Virtual Reality interfaces to the World Wide Web. |
Several BOF attendees described projects already underway to build three dimensional graphical visualization tools which interoperate with the Web. |
Attendees agreed on the need for these tools to have a common language for specifying 3D scene description and WWW hyperlinks -- an analog of HTML for virtual reality. |
The term Virtual Reality Markup Language (VRML) was coined, and the group resolved to begin specification work after the conference. |
The word 'Markup' was later changed to 'Modeling' to reflect the graphical nature of VRML. |
VRML1 defines scene graphs which are defined hierarchically as a set of nodes |
Nodes fall into one of 36 types arranged into broad classes -- shape, geometry/material, transformation, camera, lighting, group and WWWInline |
The state of a graph (color, position, rotation etc.) is "advanced" (e.g. transformation matrices are multiplied) as each node is encountered |
Node instances can be given names to allow them to be replicated or otherwise referenced |
Nodes have attributes (properties or parameters if you prefer) which are called fields |
Fields have mnemonic names specific to each node type |
However, there is a set of global field types (single or multiple (vector), floating point or integer etc.); each field is assigned a particular type, and this implies the format that must be used |
After an interesting "competition", a new standard, VRML2.0, was adopted in May 1996. After some uncertainty, it appears that this standard will be accepted by the major players |
Conceptually it is similar to VRML1.0 in most areas, but there are many changes in detail and VRML1.0 is NOT a subset of VRML2.0 (MOST of the language is changed) |
The major goal of VRML2.0 was to support dynamic 3D worlds where components of the world move relative to each other and interact together |
CosmoPlayer - Silicon Graphics |
WorldView - Intervista Software |
Liquid Reality - DimensionX |
Community Place - Sony |
Figure 1: Conceptual model of a VRML browser |
The three main components of the browser are: |
Parser, Scene Graph, and Audio/Visual Presentation. |
User input generally affects sensors and navigation, and thus is wired to the ROUTE Graph component (sensors) and the Audio/Visual Presentation component (navigation). |
The Audio/Visual Presentation component performs the graphics and audio rendering of the Transform Hierarchy that feeds back to the user. |
VRML allows you to do |
A VRML file is a textual description of your VRML world. A VRML file consists of the following parts |
The VRML file header is the only required part. This header should be the first line of the file. |
#VRML V2.0 utf8
|
Any line beginning with # is a comment line. |
#VRML V2.0 utf8 |
Transform { # Completes on later foil |
children [ # Closing ] two foils later! |
NavigationInfo { headlight FALSE } |
# We'll add our own light |
DirectionalLight { # First child |
direction 0 0 -1 # Light illuminating the scene |
} |
Transform { # Second child - a red sphere |
translation 3 0 1 |
children [ |
Shape { geometry Sphere { radius 2.3 } |
appearance Appearance { |
material Material { diffuseColor 1 0 0 } # Red |
} |
} |
] |
} |
Transform { # Third child - a blue box |
translation -2.4 .2 1 |
rotation 0 1 1 .9 |
children [ |
Shape { geometry Box {} |
appearance Appearance { |
material Material { diffuseColor 0 0 1 } # Blue
}
}
] |
} |
] # end of children for world from two foils earlier |
} # ends grouping from two foils earlier |
[simpleworld.wrl] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/simpleworld.wrl |
A VRML file contains nodes that describe shapes and their properties in your 3D world. By using these properties it is possible to describe shapes (box, cylinder, sphere, etc.), colors, lights, viewpoints, animation timers, sensors, interpolators, etc. |
Nodes generally contain the following properties |
Each node has attributes. |
Attributes can be Field or Event. |
Both of them also have a Field Type. |
There are two kinds of fields: field and exposedField. |
fields define the initial values for the node's state, but cannot be changed and are considered private. |
exposedFields also define the initial value for the node's state, but are public and may be modified by other nodes. |
Each exposedField attribute has an associated implicit set_ eventIn and _changed eventOut so that exposed fields may be connected using ROUTE statements, and may be read and/or written by Script nodes |
exposedField foo is equivalent to: field foo, eventIn set_foo, and eventOut foo_changed |
Each node has specific fields and events |
Figure 2 : Fields and Events |
Fields define the attributes of a node. |
Fields are optional within nodes since each field has a default value that is used by your VRML browser if you do not specify a value. |
Field values define attributes like color, size, or position, and every value is of a specific field type, which describes the kind of values allowed in the field. |
SFBool : Boolean or logical values. TRUE or FALSE |
SFColor/MFColor : A group of three floating point values describing the red, green, and blue components of a color. Values should be between 0 and 1. |
SFFloat/MFFloat : Floating-point values. Examples: -90.99, 89.0. |
SFImage : A list of values describing the colors of a digital picture. It can be used to create a surface texture to color the shape. |
SFInt32/MFInt32 : 32-bit integer values. |
SFNode/MFNode : A VRML node such as Box. |
SFRotation/MFRotation : A group of four floating point values. The first three values define a rotation axis. The last value is a rotation angle measured in radians. This helps to define the orientation of a shape. |
SFString/MFString : A list of characters enclosed within quotation marks("). |
SFTime : A floating-point value. Gives a real world absolute time, measured in seconds since 12:00 midnight, GMT, January 1, 1970. It allows you to define the start time and end time of any animation. |
SFVec2f/MFVec2f : 2-D floating point vector. This allows you to specify a 2-D position. |
SFVec3f/MFVec3f : 3-D floating point vector. This allows you to specify a 3-D position. |
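To make the field types concrete, here is a minimal sketch (values illustrative) showing several of them in one node:

```vrml
Transform {
  translation 0.0 1.5 0.0       # SFVec3f
  rotation 0.0 1.0 0.0 1.57     # SFRotation: Y axis, angle in radians
  children [ ]                  # MFNode
}
```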
Nodes can receive a number of incoming set_ events, denoted as eventIn, (such as set_position, set_color, and set_on), which typically change the node. Incoming events are data messages sent by other nodes to change some state within the receiving node. |
Nodes can also send out a number of _changed events, denoted as eventOut, which indicate that something in the node has changed (for example, position_changed, color_changed, on_changed). |
The exposedField keyword may be used as a short-hand for specifying that a given field has a set_ eventIn that is directly wired to a field value and a _changed eventOut. |
The rules for naming fields, exposedFields, eventOuts and eventIns for the built-in nodes are as follows: |
Nodes can be named, and used repeatedly |
DEF node-name node-type {} |
USE node-name |
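As a minimal sketch (node and name are illustrative), DEF and USE work together like this:

```vrml
DEF RedBall Shape {                      # name this Shape "RedBall"
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }
  }
  geometry Sphere { radius 1.0 }
}
Transform {
  translation 3 0 0
  children USE RedBall                   # re-use the named node
}
```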
VRML uses a cartesian, right-handed, 3-dimensional coordinate system. |
By default, objects are projected onto a 2-dimensional device by projecting them in the direction of the positive Z axis, with the positive X axis to the right and the positive Y axis up. |
A camera or modeling transformation may be used to alter this default projection. |
The standard unit for lengths and distances specified is meters. |
The standard unit for angles is radians. |
Hierarchical file inclusion enables the creation of arbitrarily large, dynamic worlds. |
Therefore, VRML ensures that each file is completely described by the objects and files contained within it and that the effects of each file are strictly scoped by the file and the spatial limits of the objects defined in the file. |
Otherwise, the accumulation of files into larger worlds would produce unscalable results (as each added world produces global effects on all other worlds). |
VRML1.0 allowed content developers to create static 3D worlds which contained links to web content. |
VRML2.0 builds on the geometric foundation of VRML 1.0 with additional nodes in geometry and the addition of three basic concepts: |
The node types are listed by category: |
Grouping Nodes |
Browser Information |
Lights and Lighting |
Sound |
Shapes |
There are also some Concepts deleted from VRML1 |
Geometry |
Appearance |
Geometric Sensors |
Interpolation |
Scripting Nodes |
Grouping Nodes include |
Collision |
Transform |
In place of the old Info node type, VRML 2.0 provides several |
Background |
NavigationInfo |
Viewpoint |
WorldInfo |
New Lights and Lighting Nodes |
Fog |
Sound Nodes (None in VRML1.0) |
Sound |
AudioClip |
Shapes |
Shape |
Geometry |
ElevationGrid |
Extrusion |
Text |
Geometric Properties |
Color |
Coordinate (was Coordinate3 in VRML1.0) Normal and TextureCoordinate are also in this category |
New class of Appearance Nodes |
Appearance |
Scripting Nodes |
Script |
These generate events when certain things happen -- typically corresponding to user actions or to certain timing constraints |
ProximitySensor |
TouchSensor |
CylinderSensor |
PlaneSensor |
SphereSensor |
VisibilitySensor |
TimeSensor |
These are a critical set of nodes used to generate animations with smooth movement, with attributes moving through a linearly interpolated set of values |
ColorInterpolator |
CoordinateInterpolator |
NormalInterpolator |
OrientationInterpolator |
PositionInterpolator |
ScalarInterpolator |
In addition to other changes from VRML1, VRML 2.0 introduces a couple of new field types: |
SFEnum (use SFString) and SFMatrix from VRML1.0 are deleted |
The following VRML 1.0 node types have been removed from VRML2.0:
|
Separator : use Transform instead |
Transformation nodes : incorporated as fields of a Transform node (this was always possible in VRML1.0) |
Sensors provide mechanisms for the user to interact with objects in the world. |
Functionally, they generate events based on time or on user action. |
ProximitySensor |
SphereSensor |
TouchSensor |
VisibilitySensor |
CylinderSensor |
TimeSensor |
PlaneSensor |
Scripts and Interpolators are the way you breathe life into the VRML 2.0 world. |
Interpolators are a built-in behavior mechanism that generate output based on linear interpolation of a number of key data points. |
Scripts are used to create more intricate forms of behavior. Script nodes themselves contain source code or pointers to code for logic operations. |
Authors can feed data into a Script, have the script analyze the input and output an event to change the world. |
Interpolators : normal, orientation, scalar |
Script Languages : Java |
3D Spatialized Audio provides added realism and increased immersion in the virtual world. Smart use of sound can have a significant impact on the user experience. |
Keep in mind that VRML supports MIDI format for background sound tracks and that small .wav files can be triggered randomly by a script node to create ambient effects. |
VRML 2.0 has new geometry nodes to enable more compact descriptions of complex virtual worlds. These nodes and use of the DEF/USE notation are crucial to making interesting worlds. |
In VRML, the basic building blocks are shapes described by nodes and their accompanying fields and field values. |
A shape has |
VRML supports several types of primitive shape geometries, such as box, cylinder, cone, sphere, extruded shape and elevation grid. |
Shape nodes are built centered around the origin (0,0,0). |
Box { size 2.0 2.0 2.0 } # defines a box
Cone {
  bottomRadius 1.0 # field SFFloat
  height 2.0 # field SFFloat
  side TRUE # field SFBool
  bottom TRUE # field SFBool
}
Sphere { radius 1.0 } # field SFFloat
Cylinder {
  height 2.0 # field SFFloat
  radius 1.5 # field SFFloat
}
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material {} # use default
  }
  geometry Box {} # use default Box definition
}
This VRML file creates a box in the screen. |
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material {}
  }
  geometry Box { size 1.0 3.0 5.0 }
}
This VRML file also creates a box, but this time its size along the x-axis is 1, along the y-axis 3 and along the z-axis 5. |
Shapes can be grouped together to construct larger, more complex shapes. |
The node that groups together the group's shapes is called the parent. |
The shapes that make up the group are called the group's children. |
A group can have any number of children. |
Groups can even have other groups as children (nested groups). |
Group Nodes : Anchor, Billboard, Collision, Group, and Transform. |
LOD Level of Detail is also a Grouping Node and has an extra level field compared to VRML1 |
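A sketch (values and children illustrative) of how LOD's level and range fields might be used: the browser picks one child from level based on the viewer's distance, using the thresholds in range.

```vrml
LOD {
  range [ 10.0, 50.0 ]                 # switching distances in meters
  level [
    Shape { geometry Sphere {} },      # nearest: most detailed
    Shape { geometry Box {} },         # mid-range: cruder
    WorldInfo {}                       # far away: render nothing
  ]
}
```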
[ prim.wrl ] Note Inline |
of |
pedestal.wrl |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/prim.wrl |
Group {
  children [
    Shape { . . . },
    Shape { . . . },
    . . .
  ] # exposedField MFNode
  bboxCenter 0.0 0.0 0.0 # field SFVec3f
  bboxSize -1.0 -1.0 -1.0 # field SFVec3f
  addChildren # eventIn MFNode
  removeChildren # eventIn MFNode
}
#VRML V2.0 utf8
Group {
  children [
    Shape {
      appearance DEF White Appearance {
        material Material {}
      }
      geometry Box { size 1.0 3.0 5.0 }
    },
    Shape {
      appearance USE White
      geometry Box { size 1.0 3.0 5.0 }
    }
  ]
}
The Billboard is a group node. |
The billboard coordinate system is automatically rotated so that the shapes in the group, as a unit, always turn to face the viewer, even as the viewer moves around the group. This effect is called billboarding. |
This can be used to define an advertisement billboard which always turns to face the viewer. |
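A minimal Billboard sketch (contents illustrative); axisOfRotation 0 1 0 rotates the group about the Y axis to face the viewer:

```vrml
Billboard {
  axisOfRotation 0.0 1.0 0.0      # rotate about Y to face the viewer
  children Shape {
    geometry Text { string "Advertisement" }
  }
}
```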
[ robobill.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/robobill.wrl |
Events and Routes allow you to make the VRML world dynamic: clicking on a shape with a mouse can turn on a light, trigger a sound, or start up a machine. |
To implement this, the events must be wired together. One node (via an eventOut) produces the event and the other (via an eventIn) consumes it by carrying out the necessary operation in the world. |
Events are fields within the node. The sender's eventOut and the receiver's eventIn must have the same field type. |
The connection between the node generating the event and the node receiving the event is called a Route. |
Once a sensor or Script has generated an initial event, the event is propagated from the eventOut producing the event along any ROUTEs to other nodes. |
Some sensors generate multiple events simultaneously. In these cases, each event generated initiates a different event cascade with identical timestamps. |
ROUTE allows you to wire two nodes via events. |
ROUTE NodeName.eventOutName_changed TO NodeName.set_eventInName |
Routes may be established only from eventOuts to eventIns. |
Routes are not nodes; ROUTE is merely a syntactic construct for establishing event paths between nodes. |
Use DEF to give a Node a name! |
ROUTE statements may appear at either the top-level of a .wrl file or prototype implementation, or may appear inside a node wherever fields may appear. |
The types of the eventIn and the eventOut must match exactly. |
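Putting these rules together, a minimal sketch (node names illustrative) wiring a TimeSensor through a PositionInterpolator to a Transform; note each ROUTE connects an eventOut to an eventIn of the same type (SFFloat to SFFloat, SFVec3f to SFVec3f):

```vrml
DEF Clock TimeSensor { cycleInterval 2.0 loop TRUE }
DEF Mover PositionInterpolator {
  key      [ 0.0, 0.5, 1.0 ]
  keyValue [ 0 0 0, 0 2 0, 0 0 0 ]
}
DEF Ball Transform { children Shape { geometry Sphere {} } }
ROUTE Clock.fraction_changed TO Mover.set_fraction
ROUTE Mover.value_changed TO Ball.set_translation
```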
Sensor nodes generate events. |
Geometric sensor nodes (ProximitySensor, VisibilitySensor, TouchSensor, CylinderSensor, PlaneSensor, SphereSensor and the Collision group Node) generate events based on user actions, such as a mouse click(drag, over) or navigating close to a particular object. |
TimeSensor nodes generate events as time passes. |
Once a Sensor or Script has generated an initial event, the event is propagated along any ROUTEs to other nodes. |
Prototyping is a mechanism that allows the set of node types to be extended by the user from within a VRML file. It allows the encapsulation and parameterization of geometry, attributes, behaviors, or some combination thereof. |
PROTO ProtoName |
The prototype declaration contains: |
The prototype definition contains a list of one or more nodes, and zero or more routes and prototypes. |
The first node in the definition is used to define the node type |
The following nodes can be used in routes and scripts but are NOT rendered |
The nodes in this definition list may also contain the IS syntax, which associates field and event names contained within the prototype definition with the event and field names in the prototype declaration. |
[ prototype.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/prototype.wrl |
Each eventIn in the prototype declaration is associated with an eventIn or exposedField defined in the prototype's node definition via the IS syntax. |
PROTO FooTransform |
[ eventIn SFVec3f set_position ] |
{ |
Transform { set_translation IS set_position } |
} |
This definition exposes a Transform node's set_translation event by giving it a new name (set_position) in the prototype interface. |
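Once declared, the new node type can be instantiated like any built-in node (a sketch; the instance and sensor names are illustrative):

```vrml
DEF MyFoo FooTransform { }
# An SFVec3f eventOut elsewhere could then be routed to it:
# ROUTE SomeSensor.translation_changed TO MyFoo.set_position
```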
The Transform node allows you to position shapes and groups of shapes anywhere in your world by using its translation field. Every Transform node defines a new coordinate system within its parent's coordinate system. |
#VRML V2.0 utf8
Transform {
  translation 0.0 2.0 0.0
  children [
    Shape {
      appearance Appearance {
        material Material {}
      }
      geometry Cylinder {}
    }
  ]
}
Creates a Cylinder and puts its origin at (0.0,2.0,0.0) |
The rotation field of the Transform node provides the rotation operation. |
The first three values define the rotation axis, an imaginary line about which the coordinate system is rotated. |
For example, to rotate about the X-axis use 1.0 0.0 0.0 |
(Assuming that the center is 0.0 0.0 0.0) this defines the rotation axis. |
The last value in the rotation field specifies the rotation angle in terms of radians. |
The rotation center can be specified by changing the center field of the Transform node. |
#VRML V2.0 utf8 |
Group { |
children [ |
DEF Arm1 Shape { # Arm1 |
appearance Appearance { |
material Material { } |
} |
geometry Cylinder { |
height 1.0 |
radius 0.1 |
} }, |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/rotation.wrl |
[ rotation.wrl ] |
Transform { # Arm2 |
rotation 1.0 0.0 0.0 1.047 |
children USE Arm1 |
}, |
Transform { # Arm3 |
rotation 1.0 0.0 0.0 2.094 |
children USE Arm1 |
} |
] |
} |
The scale and scaleOrientation fields of the Transform node allow you to scale shapes and groups of shapes to any size. |
The scale field has three values, each defining the scale factor along the corresponding axis. This allows you to warp shapes. |
The scaleOrientation field allows you to specify an orientation for the scaling. When it is specified, the coordinate system is rotated before the scale operation and rotated back afterwards, so that the coordinate system itself remains unchanged. This orientation change enables you to scale shapes diagonally instead of just horizontally, vertically, and front to back. |
#VRML V2.0 utf8
Group {
children [
# Ground |
Shape { |
appearance DEF White |
Appearance { |
material Material { } |
} |
geometry Box { |
size 12.0 0.1 12.0 |
} |
}, |
# A tree scaled along a diagonal axis using scale orientation. |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/scaledtree.wrl |
Transform { # Tree |
translation 0.0 1.0 0.0 |
scale 1.0 2.0 1.0 |
scaleOrientation 0.0 0.0 1.0 -0.785 |
center 0.0 -1.0 0.0 |
children [ # Trunk |
Shape { appearance USE White |
geometry Cylinder { radius 0.5 |
height 2.0 |
} |
}, |
Transform { # Branches |
translation 0.0 3.0 0.0 |
children Shape { appearance USE White |
geometry Cone { bottomRadius 2.0 |
height 4.0 |
} |
} |
} |
] |
} |
] |
} |
It is possible to define the appearance of each shape by specifying the attributes of its material. The Appearance and Material nodes allow you to define this appearance. |
VRML simplifies the simulation of real world lighting by ignoring reflections and assuming light comes either from a specific source or is "ambient". The latter is the sum of all complex lighting! |
In the real world, when light rays strike a surface, some light is absorbed by it, some is reflected off it, and some is transmitted through the surface. |
The light that reflects off or is transmitted through a surface continues into the world, bouncing from surface to surface to illuminate the scene. |
To avoid these complex lighting operations, VRML uses a simplified method. It supports two types of reflection: diffuse reflection and specular reflection. |
transparency specifies the amount of transparency as a factor between 0.0 and 1.0. A value of 0.0 indicates that the corresponding shape is opaque and a value of 1.0 indicates that the shape is completely transparent. |
Diffuse reflection bounces light off a surface in random directions. Since rays bounce in random directions, there will be a gentle and diffuse light on a shape without any glints, sparkles or highlights. |
diffuseColor controls the RGB color of the light reflected by diffuse reflection. |
specularColor field specifies the color of a specular reflection highlight on the surface used when light rays are not bent through large angles. |
Specular reflection bounces light off a surface in a mathematically predictable way that makes shiny surfaces reflect the world around them. |
shininess specifies the shininess of the material. |
Ambient light is a light that has bounced from surface to surface many times and provides a general, overall illumination for a room. |
ambientIntensity specifies how a shape responds to ambient light. This value varies between 0 and 1. The higher a shape's ambient intensity, the more sensitive to ambient light the shape is. |
emissiveColor specifies the glow as an RGB color. Brighter emissive colors make shapes appear to emit more light so that they glow more brightly. |
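The Material fields discussed above can be combined in one node, e.g. (values are illustrative):

```vrml
Shape {
  appearance Appearance {
    material Material {
      diffuseColor 0.8 0.2 0.2      # main surface color
      specularColor 1.0 1.0 1.0     # color of highlights
      shininess 0.6
      ambientIntensity 0.3
      emissiveColor 0.0 0.0 0.0     # no glow
      transparency 0.25             # slightly see-through
    }
  }
  geometry Sphere { }
}
```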
[ slabs.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/slabs.wrl |
Animation is the change of something as time progresses, such as a change in position, scale, orientation, etc. |
Any animation needs two elements: |
An animation which is independent of the exact date and time at which it is played back can be described by fractional time. |
Animation descriptions use a technique called keyframe animation, where a position or rotation is specified for only a few key fractional times. |
A PositionInterpolator node allows you to define a list of key fractional times and key values to describe an animation. |
Key fractional times are listed as values in an interpolator node's key field, while key values are listed in the node's keyValue field. |
A simple animation can be implemented with the following routes. The fraction_changed eventOut of a TimeSensor node is wired to the set_fraction eventIn of an interpolator. |
Whenever the interpolator receives a set_fraction event, it computes a new position value (according to its keyValues) and places it in its value_changed eventOut. |
By wiring this eventOut, which holds a 3-vector, to the Transform node's appropriate eventIn attribute, the animation is implemented. |
[ doorway.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/doorway.wrl |
A ProximitySensor routes to four TimeSensors, one per sliding stair. (see later for these nodes) |
Each stair's timer routes to a PositionInterpolator that outputs varying positions side-to-side along the X axis. |
The PositionInterpolator's positions are routed into translation fields for Transform nodes surrounding each stair. |
By using different starting positions, the four stairs are made to slide back and forth in a pattern. |
DEF Stair1 Transform { # Label each stair |
translation 0.0 0.0 0.0 # Will change |
# this translation's value |
children [ # name Stair so can re-use |
DEF Stair Inline { |
url "tread.wrl" # This wrl holds |
}, # details of stairs |
] |
}, |
For each stair, a TimeSensor (Stair1Timer on next foil) is triggered by a world ProximitySensor. (set when you enter bounding box) |
Once triggered, the time sensor outputs fractions forever. |
The fractions are routed in to PositionInterpolators (Stair1Path) that smoothly interpolate a sliding position back and forth along the X axis. |
These positions are routed in to translations (Stair1.set_translation) for the individual stairs. |
DEF Stair1Timer TimeSensor { |
cycleInterval 4.0 |
loop TRUE |
} |
DEF Stair1Path PositionInterpolator { |
key [ 0.0, 0.25, 0.5, 0.75, 1.0 ] |
keyValue [ 0.0 0.0 0.0, 3.0 0.0 0.0, 0.0 0.0 0.0, -3.0 0.0 0.0, 0.0 0.0 0.0 ] |
} |
ROUTE Proximity.enterTime TO Stair1Timer.set_startTime |
ROUTE Stair1Timer.fraction_changed TO Stair1Path.set_fraction |
ROUTE Stair1Path.value_changed TO Stair1.set_translation |
The PositionInterpolator and OrientationInterpolator nodes both use linear interpolation to compute intermediate values between the given key values. |
ColorInterpolator {
  set_fraction # eventIn SFFloat
  key [] # exposedField MFFloat
  keyValue [] # exposedField MFColor
  value_changed # eventOut SFColor
}
# Interpolates a RGB color field |
# The number of keyValues must be equal to the number of keys
ScalarInterpolator {
  set_fraction # eventIn SFFloat
  key [] # exposedField MFFloat
  keyValue [] # exposedField MFFloat
  value_changed # eventOut SFFloat
}
# Interpolates a single floating point number |
# The number of keyValues must be equal to the number of keys
The NormalInterpolator node can be used to animate the normal vectors in a Normal node, which contains sets of normals used in IndexedFaceSet and ElevationGrid. |
The number of keyValues must be a multiple of the number of keys |
The normal interpolation is performed along arcs joining keyValues on the surface of a sphere |
A sensor node senses user actions made with a pointing device, such as a mouse. |
There are two different pointing device types: 2-D pointing devices and 3-D pointing devices. |
The browser observes three types of action from the pointing device |
A TimeSensor node can create a clock tick. It can be started or stopped at a specific date and time by providing absolute times for the node's startTime and stopTime fields. |
The isActive eventOut shows whether the TimeSensor node is active or not. |
It outputs fractional times on its fraction_changed eventOut. |
The length of time between fractional times 0.0 and 1.0 can be specified by the cycleInterval field. |
Continuous repetition of the same animation can be requested by setting the node's loop field. |
When the loop is running, cycleTime outputs the time at which each new cycle starts, so this eventOut can be used to synchronize several animations in the world. |
A TouchSensor node can be added to any group. It senses Click and Move actions. When the viewer's cursor moves over the sensed group (a VRML group containing a sensor node) of shapes, the isOver eventOut of the TouchSensor outputs TRUE. As long as the cursor is over the group of shapes, it stays TRUE. |
When the viewer produces a click action, the isActive eventOut of the TouchSensor outputs TRUE, and it outputs FALSE as soon as the click action ends. |
TouchSensor {
  enabled TRUE # exposedField SFBool
  hitNormal_changed # eventOut SFVec3f
  hitPoint_changed # eventOut SFVec3f
  hitTexCoord_changed # eventOut SFVec2f
  isActive # eventOut SFBool
  isOver # eventOut SFBool
  touchTime # eventOut SFTime
}
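A sketch of a TouchSensor in use (node names illustrative): isOver (SFBool) is routed to a light's set_on, so the light comes on while the cursor is over the box.

```vrml
Group {
  children [
    DEF Touch TouchSensor { },
    Shape { geometry Box { } },
    DEF Lamp DirectionalLight { on FALSE }
  ]
}
ROUTE Touch.isOver TO Lamp.set_on
```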
The PlaneSensor node senses viewer drag actions, computes the translation distances and outputs them using its translation_changed eventOut. |
By wiring the translation_changed eventOut of a PlaneSensor to the set_translation eventIn of a Transform node, it is possible to let the viewer drag a shape in the world. |
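A minimal draggable-shape sketch (names illustrative) using the wiring just described:

```vrml
DEF Dragged Transform {
  children [
    DEF Plane PlaneSensor { },
    Shape { geometry Box { } }
  ]
}
ROUTE Plane.translation_changed TO Dragged.set_translation
```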
The SphereSensor node senses viewer drag actions, computes rotation axes and angles, and outputs them using its rotation_changed eventOut. |
By wiring the rotation_changed eventOut of a SphereSensor to the set_rotation eventIn of a Transform node, it is possible to let the viewer rotate a shape in any direction in the world. |
The CylinderSensor node senses viewer drag actions, computes rotation axes and angles, and outputs them using its rotation_changed eventOut. |
By using this sensor, it is possible to define a shape, like a disk, that can be turned by dragging the mouse. |
Visibility sensors sense whether a box-shaped region in the VRML world is visible from the viewer's position and orientation, making it possible to run animations or actions only when the sensed region can be seen. |
center specifies the 3-D coordinate at the center of the sensed region. |
The size field defines the dimensions of a sensor's box-shaped region: a width, a height, and a depth, measured along the X, Y, and Z axes of the current coordinate system. |
The isActive eventOut flags whether any part of the sensed region is contained within the viewing-volume frustum. |
enabled field enables and disables the VisibilitySensor. |
An enterTime event is generated whenever an isActive TRUE event is generated. |
exitTime events are generated whenever isActive FALSE events are generated. |
Proximity sensors sense when a viewer enters and moves about within a box-shaped region in the VRML world. |
center, size, enabled, isActive, enterTime and exitTime fields have the same definition we gave in the VisibilitySensor. |
position_changed and orientation_changed events occur whenever the position and orientation of the viewer change with respect to the ProximitySensor's coordinate system. |
All objects in the scene are collidable. The browser detects geometric collisions between the user's avatar (see NavigationInfo) and the scene's geometry, and prevents the avatar from entering the geometry. |
Collision detection detects when the user comes close to and collides with the shapes in the VRML world. |
collide field enables and disables collision detection. |
The collideTime eventOut generates an event specifying the time when the user's avatar (see NavigationInfo) intersects the collidable children or proxy of the Collision node. |
proxy is a legal child node that is used as a substitute for the Collision node's children during collision detection |
bboxCenter and bboxSize fields specify a bounding box that encloses the Collision's children. |
When multiple sensor nodes are siblings in the same group, they all simultaneously sense viewer actions on the same group of shapes. |
For instance, it is possible to use a TouchSensor node to detect when the viewer's cursor has moved over a shape, and a PlaneSensor node to enable the viewer to drag the shape around. |
When multiple sensors are in different groups, and those groups are nested within each other, then the sensor that is in the innermost nested group overrides those in outer groups. |
Inlining enables one to build each shape of the world in a separate VRML file, so that the smaller parts of the world can be built independently and combined without putting everything into one file. The Inline node provides this service. |
url field defines a list of URL addresses; the browser goes through these addresses until it finds one it can open or reaches the end of the list.
|
[ chair.wrl ] |
[ table.wrl ] |
[ dinette.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/dinette.wrl |
Make up the dinette using Inline |
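A minimal sketch of how dinette.wrl could combine the chair.wrl and table.wrl files listed above; the translation value is illustrative only:

```vrml
#VRML V2.0 utf8
# Sketch of dinette.wrl: assemble separately-built parts with Inline.
Inline { url "table.wrl" }
Transform {
  translation 1.5 0 0    # place the chair beside the table
  children Inline { url "chair.wrl" }
}
```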
The Anchor grouping node provides this service, so that it is possible to connect a door in the VRML world to a destination VRML world described elsewhere on the Web. |
The node's url field specifies the URL address of the corresponding VRML Web page to which the viewer travels when the viewer clicks on the anchor shape.
|
[ stairway.wrl ] |
[ Anchor.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/anchor.wrl |
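A sketch of the idea, linking a clickable box to the stairway.wrl file mentioned above; the geometry and description are illustrative:

```vrml
# Clicking the box takes the viewer to the destination world.
Anchor {
  url "stairway.wrl"
  description "Climb the stairway"
  children Shape {
    geometry Box { size 1 2 0.2 }
  }
}
```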
Text Shapes build 2D text in a VRML world
|
Shape { geometry Text { ... } |
} |
The string field lists the text strings to build |
Each entry in the string vector generates a single line or column (depending on the FontStyle node's horizontal field value)
|
Note Text is viewed as being in "geometry" class |
The Text node builds flat (not 3D !) text characters in X-Y plane with a Z axis depth of 0.0. |
length field specifies the desired length, in VRML units, of each line of text. To match the specified length, lines or columns of text are compressed or expanded by changing their character size or character spacing. length = 0 implies no length constraint |
maxExtent field specifies the maximum permissible length, in VRML units, of any line or column of text.
|
fontStyle field specifies the characteristics defining the look and alignment of text created by the Text node. |
FontStyle { |
family "SERIF" # field SFString |
horizontal TRUE # field SFBool |
justify "BEGIN" # field MFString |
language "" # field SFString |
leftToRight TRUE # field SFBool |
size 1.0 # field SFFloat |
spacing 1.0 # field SFFloat |
style "PLAIN" # field SFString |
topToBottom TRUE # field SFBool |
} |
style field specifies the text style to use such as PLAIN, BOLD, ITALIC, and BOLDITALIC. |
family field indicates which of the standard VRML font families to use such as SERIF, SANS, and TYPEWRITER. |
size field specifies the height of the characters measured in VRML units. |
spacing field specifies the vertical line spacing, in VRML units, of horizontal text or horizontal column spacing of vertical text. |
horizontal field specifies whether text strings are built horizontally (rows) or vertically (columns). |
justify field determines alignment of the above text layout relative to the origin of the object coordinate system. The possible values for each enumerant of the justify field are "FIRST", "BEGIN", "MIDDLE", and "END". |
language field specifies the context of the language for the text string. |
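The Text and FontStyle fields above can be combined in a short sketch; the strings and field values are illustrative:

```vrml
# Two lines of bold sans-serif text, centered on the origin.
Shape {
  appearance Appearance { material Material { diffuseColor 1 1 1 } }
  geometry Text {
    string [ "Hello", "VRML 97" ]
    fontStyle FontStyle {
      family "SANS"
      style "BOLD"
      justify "MIDDLE"
      size 2.0
    }
  }
}
```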
Lights in a VRML world serve the same purpose as lights in real world. VRML provides three node types to control lighting: PointLight, DirectionalLight, and SpotLight. |
A light can be placed anywhere in the VRML world, but the viewer sees no geometrical indication of the light itself -- only its effect on the world. |
Note that lights only affect a Shape that has a non-NULL Appearance node and a non-NULL Material node within that Appearance node |
ambientIntensity approximates the overall effect of ambient reflections, rather than computing exact inter-reflections |
A PointLight is a light located in a VRML world that emanates light in a radial pattern in all directions, as if the light were all coming from a single point.
|
location specifies the 3-D coordinate of the light. |
radius defines the maximum reach of the light. A shape farther from the light than this radius will not show the light's effect. |
intensity controls the brightness of the light source. It can have a value between 0 and 1; to turn the light off, set its intensity to zero. |
ambientIntensity field controls the effect the light source has on the ambient-light level for shapes within the light's radius. |
color specifies an RGB color for the light. |
attenuation field controls how the light source's brightness fades as one moves away from the center (the PointLight's location) of its sphere of influence. For a shape coordinate within this sphere at a distance d from the light's location, the brightness at that coordinate is scaled by 1 / max( attenuation[0] + attenuation[1]*d + attenuation[2]*d^2, 1 )
|
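A sketch pulling the PointLight fields together; the values are illustrative:

```vrml
# A warm point light with a 20-unit sphere of influence and no
# distance attenuation (attenuation 1 0 0 keeps brightness constant).
PointLight {
  location 0 5 0
  radius 20
  intensity 0.8
  ambientIntensity 0.2
  color 1 1 0.8
  attenuation 1 0 0
}
```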
A DirectionalLight is a light aimed at the world from infinitely far away, so that all of its light rays are parallel and point in the same direction.
|
direction field defines a 3-D vector which indicates the direction for the light rays. |
A SpotLight is a light placed at a location in the VRML world, centered in a specific direction, and constrained so that its rays emanate within a light cone. Shapes that fall within the cone of light are illuminated by the spotlight; others are not.
|
cutOffAngle field specifies the spread angle of a cone of illumination whose tip is at the spotlight's location and whose cone axis is aligned parallel with the spotlight's aim direction. Increasing the cutOffAngle field's value widens the spread of the cone, while decreasing the angle narrows the cone of illumination. |
beamWidth field specifies an inner solid angle in which the light source emits light at uniform full intensity. If beamWidth > cutOffAngle, then beamWidth is assumed to be equal to cutOffAngle and the light source emits full intensity within the entire solid angle defined by cutOffAngle. |
Both beamWidth and cutOffAngle must be greater than 0.0 and less than or equal to PI/2 radians. |
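A sketch showing beamWidth and cutOffAngle together; the values are illustrative:

```vrml
# A downward spotlight: full intensity inside beamWidth,
# falling off toward cutOffAngle.
SpotLight {
  location 0 10 0
  direction 0 -1 0
  beamWidth 0.4       # radians
  cutOffAngle 0.8     # radians, must be <= PI/2
  intensity 1
}
```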
A VRML point is a dot located within a VRML world. To place a point, it is necessary to give its exact coordinate. The Coordinate node allows one to define 3-D coordinates:
|
The Coordinate node can be used in the coord field of a PointSet, IndexedLineSet, or IndexedFaceSet node. |
Each value of the point field contains three floating-point numbers. |
The first value in the point array is indexed as the 0th component. |
set_point and point_changed events are implied by the exposedField definition. These allow one to move or animate points |
A PointSet is a collection of points and their colors. The PointSet node defines this pairing:
|
coord field of the node holds the coordinates of the points, specified with a Coordinate node. |
color field of the node holds the color of each point. A Color node should be used to define these colors.
|
A VRML line is a straight line connecting two end points, each located in the world. |
VRML allows one to define a polyline, which has a starting coordinate, several intermediate coordinates, and an ending coordinate. The VRML browser draws a straight line from the starting coordinate to the first intermediate coordinate, from there to the second intermediate coordinate, and so on, finishing the polyline at the ending coordinate.
|
IndexedLineSet defines a geometrical structure as a set of lines and may be used for the geometry field of a Shape node. It has color, coord, colorIndex, colorPerVertex and coordIndex fields |
coord holds the set of coordinates, |
coordIndex defines the path connecting the coordinates given in the coord field to build a polyline. Each value in coordIndex is an integer index specifying a coordinate from the coord field.
|
color field holds the color, and typically it holds a Color node. |
If the colorPerVertex field is TRUE, the colorIndex field defines the color of each point (vertex) by matching a color with the corresponding coordinate (or point). colorIndex must have -1's where coordIndex does, and the values of colorIndex point into the color array |
If the colorPerVertex field has the value FALSE, then the indices of colorIndex correspond to the indices of the polylines, but again the values are pointers into the color array. |
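A sketch of the per-vertex coloring case; the coordinates and colors are illustrative:

```vrml
# A two-segment polyline with per-vertex colors
# (colorPerVertex TRUE, colorIndex mirroring coordIndex).
Shape {
  geometry IndexedLineSet {
    coord Coordinate { point [ 0 0 0, 1 1 0, 2 0 0 ] }
    coordIndex [ 0, 1, 2, -1 ]
    color Color { color [ 1 0 0, 0 1 0, 0 0 1 ] }
    colorIndex [ 0, 1, 2, -1 ]
    colorPerVertex TRUE
  }
}
```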
The major difference between IndexedFaceSet and IndexedLineSet is that the VRML browser automatically connects the starting and ending coordinates of each IndexedFaceSet face to obtain a closed polygon, then fills this area and shades it to make it look solid. |
The IndexedFaceSet geometry node defines a set of such faces to create (the surface of) a general object of arbitrary shape, and this node can be used in the geometry field of a Shape node. Notice this is the usual way of specifying objects in computer graphics
|
Shape { |
geometry IndexedFaceSet { |
coordIndex [ 0,4,3,-1, 0,3,2,-1, 0,2,1,-1, 0,1,4,-1, 1,3,4,-1, 1,2,3,-1] |
coord Coordinate { |
point [ 0 5 0, -2.5 0 -2.5, 2.5 0 -2.5, 2.5 0 2.5, -2.5 0 2.5] } |
color Color { |
color [ 1 1 0, 0 1 1, 1 0 1, 0 1 0, 0 0 1, 1 1 0, 0 1 1, |
1 0 1, 1 0 0, 0 1 0, 0 0 1, 1 1 0, 0 1 1, 1 0 1, 1 0 0, 0 1 0, 0 0 1 ] } } |
appearance Appearance { |
material Material { diffuseColor 1 1 0 ambientIntensity 0.1 }} } |
Defines a Simple Colored Pyramid |
coord Coordinate { |
point [ 0 5 0, # Called Point 0 |
-2.5 0 -2.5, # Called Point 1 |
2.5 0 -2.5, # Called Point 2 |
2.5 0 2.5, # Called Point 3 |
-2.5 0 2.5 # Called Point 4 |
] } |
The point field defines the vertices of the object |
There are five points defined. The points are labeled from 0 to 4 |
Coordinate just defines points to be used later in IndexedFaceSet |
geometry IndexedFaceSet { |
coordIndex [ 0,4,3,-1, 0,3,2,-1, 0,2,1,-1, 0,1,4,-1, 1,3,4,-1, 1,2,3,-1] |
The coordIndex field has a list of vertex numbers which define triangles (indicated by terminating -1) |
This implies square at bottom of Pyramid broken into 2 triangles |
Vertex numbers are those whose position was specified in coord field |
IndexedFaceSet {
|
ccw TRUE # field SFBool |
convex TRUE # field SFBool |
creaseAngle 0 # field SFFloat |
normalIndex [] # field MFInt32 |
normalPerVertex TRUE # field SFBool |
solid TRUE # field SFBool |
texCoordIndex [] # field MFInt32 |
} |
coord, coordIndex, and color fields have the same role as in IndexedLineSet. |
If colorPerVertex is FALSE, then the colors from the Color node are used for whole faces. Otherwise, the colors define a color for each vertex, and the color across a face is interpolated between the colors of its vertices. |
ccw, convex, solid, and creaseAngle are shared by ElevationGrid, Extrusion, and IndexedFaceSet, and are used to speed up and clarify the rendering strategy for solids |
ccw field defines whether the faces in the face set are indexed in counterclockwise order or not (used together with solid for backface culling) |
convex field defines whether all the faces in the face set are convex or not. If faces are concave, the VRML browser splits them into convex faces before drawing them. |
solid field if TRUE indicates that Shape encloses a volume and one can use so-called backface culling |
creaseAngle field defines a crease angle threshold (in radians). Adjacent faces with an angle between them smaller than the crease-angle threshold are smoothly shaded to obscure the edge between them. |
Color {color [] # exposedField MFColor } |
Each element of the color field contains three floating-point numbers defining an RGB color. Elements are indexed beginning from 0. |
Points' colors can be set by a Color node, by a Material node, or both: if a Color node is present, its colors are used and take precedence over the Material's diffuseColor; if no Color node is given, the Material's colors apply.
|
ElevationGrid { |
set_height # eventIn MFFloat |
color NULL # exposedField SFNode |
normal NULL # exposedField SFNode |
texCoord NULL # exposedField SFNode |
height [] # field MFFloat |
ccw TRUE # field SFBool |
colorPerVertex TRUE # field SFBool |
creaseAngle 0 # field SFFloat |
normalPerVertex TRUE # field SFBool |
solid TRUE # field SFBool |
xDimension 0 # field SFInt32 |
xSpacing 0.0 # field SFFloat |
zDimension 0 # field SFInt32 |
zSpacing 0.0 # field SFFloat |
} |
Defines a two-dimensional array of points in the x-z plane, with heights given in the height field |
texCoord is critical to drape interesting land structure over this framework |
[ elevationgrid.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/elevationgrid.wrl |
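A minimal sketch of the node in use; the heights and colors are illustrative:

```vrml
# A 3x3 grid of heights forming a small green mound.
Shape {
  appearance Appearance { material Material { diffuseColor 0.2 0.8 0.2 } }
  geometry ElevationGrid {
    xDimension 3
    zDimension 3
    xSpacing 1.0
    zSpacing 1.0
    height [ 0 1 0
             1 2 1
             0 1 0 ]   # xDimension * zDimension = 9 heights
  }
}
```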
Extrusion { |
set_crossSection # eventIn MFVec2f |
set_orientation # eventIn MFRotation |
set_scale # eventIn MFVec2f |
set_spine # eventIn MFVec3f |
beginCap TRUE # field SFBool |
ccw TRUE # field SFBool |
convex TRUE # field SFBool |
creaseAngle 0 # field SFFloat |
crossSection [1 1, 1 -1, -1 -1, -1 1, 1 1] # field MFVec2f |
endCap TRUE # field SFBool |
orientation 0 0 1 0 # field MFRotation -- can be array |
scale 1 1 # field MFVec2f -- can be array |
solid TRUE # field SFBool |
spine [0 0 0, 0 1 0] # field MFVec3f |
} |
Roughly, take the 2D crossSection shape and extend it into 3D along the spine (a vertical extension by default), with modifications given by scale and orientation. |
[ extrusion.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/extrusion.wrl |
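A sketch of an extrusion along a bent spine; the spine and scale values are illustrative:

```vrml
# Extrude the default square cross section along a bent spine,
# tapering it with the scale array (one scale per spine point).
Shape {
  geometry Extrusion {
    crossSection [ 1 1, 1 -1, -1 -1, -1 1, 1 1 ]
    spine [ 0 0 0, 0 2 0, 1 3 0 ]
    scale [ 1 1, 0.8 0.8, 0.3 0.3 ]
  }
}
```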
The brightness of a face in a VRML shape depends on how much it is lit by the lights in the VRML world. The more a face is oriented toward a light, the more brightly it is shaded by the VRML browser.
|
VRML browser measures the angle between the light rays and the normal of the surface and calculates the shading of that surface. |
It is also possible to provide the surface normals explicitly, so that the browser does not have to calculate them. |
The IndexedFaceSet and ElevationGrid nodes have a normal field. By using a Normal node, it is possible to define these surface normals
|
vector field specifies a list of unit normal vectors that may be used as normals for a shape. |
Texture mapping is a technique used to add detail to a world without creating shapes for every small part of it.
|
It is possible to texture map any VRML shape, except those created by using PointSet or IndexedLineSet nodes. |
Image files are used in a texture map by specifying their URL. |
VRML texture image files can contain a single texture image or a movie containing a series of texture images, like the frames in film. |
JPEG and GIF file formats are used for non-movie texture images, and the MPEG format is used for movie textures. |
PNG can also be used for a non-movie texture. PNG stands for Portable Network Graphics; it is still in its development phase and is specifically designed for Web applications. |
When a color texture image is in use, the colors in the texture image are mapped to parts of a VRML shape as if the shape were being painted; the diffuseColor field of the shape's Material node is ignored, while the remaining fields of the Material node, such as transparency, still apply. |
When a gray-scale texture image is in use, the gray-level value of each pixel is multiplied by the diffuseColor field of the shape's Material node. |
Some texture image file formats can optionally store a transparency level for each pixel. |
When an image with pixel transparency is texture mapped to a shape, the pixel transparencies control transparency from point to point across the shape. |
The ImageTexture, PixelTexture, and MovieTexture nodes can be used to specify a texture for mapping onto a shape. These nodes are used in the texture field of an Appearance node. |
ImageTexture { |
url [] # exposedField MFString |
repeatS TRUE # field SFBool |
repeatT TRUE # field SFBool |
} |
The ImageTexture node can provide the URL address of a texture image file in JPEG, GIF, or PNG file format. |
url field specifies a list of URL addresses for the texture image file. |
The PixelTexture node specifies the pixel values explicitly, one by one, from left to right and bottom to top, so that no image file address need be mentioned in the VRML file. |
S and T are the "x" and "y" of a texture map; if repeatS/repeatT are TRUE, the image is automatically wrapped (tiled) over the Shape
|
[ can.wrl ] |
http://www.npac.syr.edu/users/zeynep/VRML2.0_EXAMPLES/samples/can.wrl |
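A sketch in the spirit of the can example above; the file name label.jpg is hypothetical:

```vrml
# Wrap an image texture around a cylinder.
Shape {
  appearance Appearance {
    material Material { }
    texture ImageTexture { url "label.jpg" repeatS TRUE repeatT TRUE }
  }
  geometry Cylinder { height 2 radius 0.7 }
}
```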
PixelTexture { |
image 0 0 0 # exposedField SFImage |
repeatS TRUE # field SFBool |
repeatT TRUE # field SFBool |
} |
image field specifies the image size and pixel values for a texture image used to texture map a shape. The first three integer values in the image field are |
the width of the image in pixels, the height of the image in pixels, and the number of components per pixel
|
MovieTexture node can specify an MPEG movie texture file for texture mapping.
|
startTime field specifies the time at which the movie texture starts playing. |
stopTime field specifies the time at which the movie texture stops playing. |
speed field specifies a multiplication factor for speeding up or slowing down the movie, and for playing it forward or backward
|
loop field specifies whether the movie playback loops or not. |
The number of frames in the movie determines the movie's duration. As soon as the movie is read in by the browser, the movie's duration, measured in seconds, is determined and output on the duration_changed eventOut. |
A texture image is treated as lying on a 2-D graph whose x axis is called the S axis and whose y axis is called the T axis. |
Texture image always extends from 0.0 to 1.0 in the S (x) direction and from 0.0 to 1.0 in the T (y) direction in the texture coordinate system. |
TextureCoordinate only has a point field and specifies a list of texture coordinates and may be used as the value of the texCoord field in the IndexedFaceSet and ElevationGrid nodes.
|
It is possible to give a point in a TextureCoordinate node which is outside of the texture image such as beyond the left, right, bottom, or top edges of the image.
|
When wrapping is not in use, the mode is called clamping. Clamping is useful for making sure that a texture image occurs just once; it restricts the texture coordinates in the S direction, the T direction, or both to lie in the range 0 to 1. |
repeatS and repeatT fields of the texture nodes specify whether wrapping is on or off in the corresponding direction. |
As the detail in a VRML world increases, the VRML browser spends more time loading and rendering it. Keeping a balance between detailed representation and quick rendering is vital. |
One technique takes advantage of the fact that shapes farther away from the viewer need not be rendered with as much detail as those closer to the viewer. This is called the level-of-detail (LOD) technique.
|
Terrain Renderer in VRML is a good example of use of LOD Node |
This group node is used to allow applications to switch between various representations of objects automatically. |
The children of this node typically represent the same object or objects at varying levels of detail, from highest detail to lowest. |
The specified center point of the LOD is transformed by the current transformation into world space, and the distance from the transformed center to the world-space eye point is calculated. |
If the distance is less than the first value in the ranges array, then the first child of the LOD group is drawn. |
If between the first and second values in the ranges array, the second child is drawn, etc. |
If there are N values in the ranges array, the LOD group should have N+1 children. |
Specifying too few children will result in the last child being used repeatedly for the lowest levels of detail; |
If too many children are specified, the extra children will be ignored. |
Each value in the range array should be greater than the previous value; otherwise results are undefined. |
FILE FORMAT/DEFAULTS |
LOD { |
exposedField MFNode level [] |
field SFVec3f center 0 0 0 # (-inf, inf) |
field MFFloat range [] # [0, inf) |
} |
level field contains Shape nodes and other grouping nodes; here the Inline node is a good tool to use. The first child provides the highest-detail version of the shape, subsequent children provide lower-detail versions, and so on. |
center field specifies a 3-D coordinate, in the local coordinate system, for the center of the shape built within the LOD node. |
range field specifies a list of distances from the viewer to the center of the shape at which the browser should switch from one level of detail to another.
|
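A sketch of an LOD group with two range boundaries and three children; the .wrl file names are hypothetical:

```vrml
# Three representations, switched at viewer distances 10 and 50.
LOD {
  center 0 0 0
  range [ 10, 50 ]
  level [
    Inline { url "statueHigh.wrl" }   # nearest: most detailed
    Inline { url "statueLow.wrl" }
    Shape { geometry Box { } }        # crude far-away stand-in
  ]
}
```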
Adding sound to the VRML world involves two components: the sound source and the sound emitter.
|
AudioClip and MovieTexture nodes provide the sound source component. |
AudioClip { |
description "" # exposedField SFString |
loop FALSE # exposedField SFBool |
pitch 1.0 # exposedField SFFloat |
startTime 0 # exposedField SFTime |
stopTime 0 # exposedField SFTime |
url [] # exposedField MFString |
duration_changed # eventOut SFTime |
isActive # eventOut SFBool } |
url field specifies a sound file that is in WAV or MIDI file format. |
startTime and stopTime fields specify the absolute times at which a sound starts and stops playing. |
loop field indicates whether a sound should be played just once or played repeatedly in a loop. |
pitch field controls the speed with which a sound source is played. |
A MovieTexture node may be used as a sound source only if the movie file referenced by its url field is in the MPEG-1 Systems format. |
The Sound node describes a sound emitter in the VRML world. |
Sound { |
direction 0 0 1 # exposedField SFVec3f |
intensity 1 # exposedField SFFloat |
location 0 0 0 # exposedField SFVec3f |
maxBack 10 # exposedField SFFloat |
maxFront 10 # exposedField SFFloat |
minBack 1 # exposedField SFFloat |
minFront 1 # exposedField SFFloat |
priority 0 # exposedField SFFloat |
source NULL # exposedField SFNode |
spatialize TRUE # field SFBool |
} |
location field specifies a 3-D coordinate for the location of the sound emitter in the current coordinate system. |
direction field specifies a 3-D vector indicating the aim direction for the sound emitter. |
intensity field controls the volume of the sound emitter. The value 1.0 sets the volume to maximum, and 0.0 reduces the volume to zero (turning off the sound emitter). |
source field specifies the sound source for the sound node. |
priority field gives the author some control over which sounds the browser will choose to play when there are more sounds active than sound channels available. |
Browsers should support spatial localization of sound as well as their underlying sound libraries allow. The spatialize field indicates to browsers that they should try to spatialize this sound. |
minFront and minBack determine the extent of the full intensity region in front of and behind the sound. |
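A sketch combining a Sound emitter with an AudioClip source; the file name chime.wav is hypothetical:

```vrml
# A looping chime emitted at roughly head height.
Sound {
  location 0 1.6 0
  minFront 2     # full-intensity region in front
  maxFront 20    # audible out to 20 units in front
  minBack 1
  maxBack 10
  source AudioClip {
    url "chime.wav"
    loop TRUE
    startTime 1
  }
}
```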
VRML's background features provide control over the world's sky and ground colors and enable one to specify a set of panorama images to be placed around, above, and below the VRML world. The VRML world's sky is an infinitely large sphere placed around the world; it covers the ground as well. The panorama is built using an infinitely large cube placed around the world, just inside the sky and ground spheres. |
frontUrl, backUrl, leftUrl, rightUrl, topUrl, and bottomUrl fields of the Background node specify the URL addresses of the images for the corresponding faces. |
skyColor defines a list of RGB colors to paint the sky. |
groundColor defines a list of RGB colors to paint the ground. |
Top View Side View |
Background { |
set_bind # eventIn SFBool |
groundAngle [] # exposedField MFFloat |
groundColor [] # exposedField MFColor |
backUrl [] # exposedField MFString |
bottomUrl [] # exposedField MFString |
frontUrl [] # exposedField MFString |
leftUrl [] # exposedField MFString |
rightUrl [] # exposedField MFString |
topUrl [] # exposedField MFString |
skyAngle [] # exposedField MFFloat |
skyColor [ 0 0 0 ] # exposedField MFColor |
isBound # eventOut SFBool |
} |
By using a Fog node, a light haze or a thick pea-soup fog can be produced. It is also possible to define a color for this fog effect.
|
color specifies the RGB color of a fog. |
visibilityRange specifies the maximum distance at which shapes remain visible to the viewer; shapes farther away than this distance are hidden by the fog effect. |
fogType defines the interpolation of fog thickness with respect to the distance between shape and viewer. It can be LINEAR or EXPONENTIAL. |
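A minimal sketch of the Fog node; the values are illustrative:

```vrml
# A light gray linear fog that hides everything beyond 100 units.
Fog {
  color 0.8 0.8 0.8
  fogType "LINEAR"
  visibilityRange 100
}
```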
A viewpoint is a predefined viewing position and orientation in the VRML world. The Viewpoint node allows one to define the location and viewing direction of the viewpoint. This node is generally added to the children of a Transform node, so that when the coordinate system moves, so does the viewpoint. |
Switching from one viewpoint to another is handled by the browser's user interface, and browsers have well-defined ways of changing viewpoint.
|
position field specifies a 3-D coordinate for the location of viewpoint in the current coordinate system. |
orientation field specifies a rotation axis and angle with which to rotate the viewpoint. |
fieldOfView field specifies an angle (in radians) indicating the spread angle of the viewpoint's viewing frustum.
|
description field specifies a text string used to describe the viewpoint. |
Viewpoints are kept in a stack, and the top of the stack is the current viewpoint.
|
When a jump viewpoint (jump TRUE) is placed on top of the viewpoint stack, the viewer automatically jumps to that Viewpoint node's position, orientation, and field of view. |
When a no-jump viewpoint is placed on top of the viewpoint stack, by selecting it from a browser's menu for example, the user retains his or her current view. |
In any case, the viewer's movements are now relative to the new current viewpoint node and to the coordinate system in which that viewpoint is built. |
An avatar is a symbolic, virtual-world representation of a real-world person in virtual reality. Using an avatar, the real-world person moves through the virtual world, seeing what the avatar sees and interacting by telling the avatar what to do. In a multi-user virtual reality environment, each user in the environment chooses a 3-D shape as a representative in the VRML world. |
A description of an avatar can be divided into two sections: its appearance and its movement features.
|
The movement features of the avatar are primarily controlled by the NavigationInfo node. |
NavigationInfo { |
set_bind # eventIn SFBool |
avatarSize [ 0.25, 1.6, 0.75 ] # exposedField MFFloat |
headlight TRUE # exposedField SFBool |
speed 1.0 # exposedField SFFloat |
type "WALK" # exposedField MFString |
visibilityLimit 0.0 # exposedField SFFloat |
isBound # eventOut SFBool } |
These parameter values are typically set in Browser window |
type field specifies the style of motion such as WALK, FLY, EXAMINE and NONE. |
speed defines the recommended avatar motion speed, measured in units per second. |
avatarSize field defines the size characteristics of the viewer's avatar. |
headlight field turns on/off the avatar's headlight. A headlight is a directional light that always points in the direction the user is looking. Setting this field to TRUE allows the browser to provide a headlight, possibly with user interface controls to turn it on and off. |
visibilityLimit field defines the distance to the far end of the viewing frustum and establishes the farthest distance the viewer can see. |