This section describes key concepts relating to the definition and use of the VRML specification. This includes how nodes are combined into scene graphs, how nodes receive and generate events, how to create node types using prototypes, how to add node types to VRML and export them for use by others, how to incorporate programmatic scripts into a VRML file, and various general topics on nodes.
For easy identification of VRML files, every VRML 2.0 file must begin with the characters:
#VRML V2.0 utf8
The identifier utf8 allows for international characters to be displayed in VRML using the UTF-8 encoding of the ISO 10646 standard. Unicode is an alternate encoding of ISO 10646. UTF-8 is explained under the Text node.
Any characters after these on the same line are ignored. The line is terminated by either the ASCII newline or carriage-return characters.
The # character begins a comment; all characters until the next newline or carriage return are ignored. The only exception to this is within double-quoted SFString and MFString fields, where the # character will be part of the string.
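For example, in the following illustrative fragment, the # inside the quoted string is part of the SFString value, while the # following it begins a comment:

Shape {
  geometry Text {
    string "Gate #3"  # the quoted # above is string data; this text is a comment
  }
}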
Note: Comments and whitespace may not be preserved; in particular, a VRML document server may strip comments and extra whitespace from a VRML file before transmitting it. WorldInfo nodes should be used for persistent information such as copyrights or author information. To extend the set of existing nodes in VRML 2.0, use prototypes or external prototypes rather than named information nodes.
Commas, blanks, tabs, newlines and carriage returns are whitespace characters wherever they appear outside of string fields. One or more whitespace characters separate the syntactical entities in VRML files, where necessary.
After the required header, a VRML file can contain any combination of the following:

- any number of prototypes (PROTO and EXTERNPROTO statements),
- any number of root nodes, and
- any number of ROUTE statements.
See the "Grammar Reference" annex for precise grammar rules.
Field, event, prototype, and node names must not begin with a digit (0x30-0x39) but may otherwise contain any characters except for non-printable ASCII characters (0x0-0x20), double or single quotes (0x22: ", 0x27: '), sharp (0x23: #), plus (0x2b: +), comma (0x2c: ,), minus (0x2d: -), period (0x2e: .), square brackets (0x5b, 0x5d: []), backslash (0x5c: \) or curly braces (0x7b, 0x7d: {}). Characters in names are as specified in ISO 10646, and are encoded using UTF-8. VRML is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."
The following reserved keywords shall not be used for node, PROTO, EXTERNPROTO, or DEF names:

DEF, EXTERNPROTO, FALSE, IS, NULL, PROTO, ROUTE, TO, TRUE, USE, eventIn, eventOut, exposedField, field
In this document, the first item in a node specification is the public interface for the node. The syntax for the public interface is the same as that for that node's prototype. This interface is the definitive specification of the fields, events, names, types, and default values for a given node. Note that this syntax is not the actual file format syntax. However, the parts of the interface that are identical to the file syntax are in bold. For example, the following defines the Collision node's public interface and file format:
Collision {
  eventIn      MFNode  addChildren
  eventIn      MFNode  removeChildren
  exposedField MFNode  children    []
  exposedField SFBool  collide     TRUE
  field        SFVec3f bboxCenter  0 0 0
  field        SFVec3f bboxSize    -1 -1 -1
  field        SFNode  proxy       NULL
  eventOut     SFTime  collideTime
}
Fields that have associated implicit set_ and _changed events are labeled exposedField. For example, the on field has an implicit set_on input event and an on_changed output event. Exposed fields may be connected using ROUTE statements, and may be read and/or written by Script nodes. Also, any exposedField or eventOut name can be prefixed with get_ to indicate a read of the current value of the eventOut. This is used only in Script nodes or when accessing the VRML world from an external API.
Note that this information is arranged in a slightly different manner in the actual file syntax. The keywords "field" or "exposedField" and the types of the fields (e.g. SFColor) are not specified when expressing a node in the file format. An example of the file format for the Collision node is:
Collision {
  children   []
  collide    TRUE
  bboxCenter 0 0 0
  bboxSize   -1 -1 -1
  proxy      NULL
}
The rules for naming fields, exposedFields, eventOuts and eventIns for the built-in nodes are as follows:

- All names containing multiple words start with a lower-case letter, and the first letter of each subsequent word is capitalized (e.g. bboxCenter), with the exception of the set_ prefix and _changed suffix described below.
- All eventIns have the prefix set_, with the exception of the addChildren and removeChildren eventIns.
- All eventOuts have the suffix _changed appended, with the exception of eventOuts of type SFBool or SFTime. Boolean eventOuts begin with the word "is" (e.g. isActive), and SFTime eventOuts end with the word "Time" (e.g. enterTime).

User-defined field names found in Script and PROTO nodes are recommended, but not required, to follow these naming conventions.
A URL (Uniform Resource Locator) [URL] specifies a file located on a particular server and accessed through a specified protocol (e.g. http). A URN (Uniform Resource Name) [URN] provides a more abstract way to refer to data than is provided by a URL.
All URL/URN fields are of type MFString. The strings in the field indicate multiple locations to look for data, in decreasing order of preference. If the browser cannot locate the first URL/URN or doesn't support the protocol type, then it may try the second location, and so on. Note that the URL/URN entries are delimited by double quotes and, due to the "Data: Protocol" and the "Scripting Language Protocols", are a superset of the standard URL syntax (IETF RFC 1738). Browsers may skip to the next URL/URN by searching for the closing, un-escaped ". See "Field and Event Reference - SFString and MFString" for details on the string field.
URLs are described in "Uniform Resource Locator", IETF RFC 1738, http://ds.internic.net/rfc/rfc1738.txt.
Relative URLs are handled as described in "Relative Uniform Resource Locator", IETF RFC 1808, http://ds.internic.net/rfc/rfc1808.txt.
VRML 2.0 browsers are not required to support URNs. If they do not support URNs, they should ignore any URNs that appear in MFString fields along with URLs.
See "URN's" for more details on URNs.
The IETF is in the process of standardizing a "Data:" URL to be used for inline inclusion of base64-encoded data, such as JPEG images. This capability should be supported as specified in "Data: URL scheme", http://www.internic.net/internet-drafts/draft-masinter-url-data-01.txt [DATA]. Note that this is an Internet Draft, and the specification may (but is unlikely to) change.
The Script node's url field may also support a custom protocol for the various scripting languages. For example, a script URL prefixed with javascript: shall contain JavaScript source, with newline characters allowed in the string. A script prefixed with javabc: shall contain Java bytecodes using a base64 encoding. The details of each language protocol are defined in the appendix for each language. Browsers are not required to support any specific scripting language, but if they do then they shall adhere to the protocol for that particular scripting language. The following example illustrates the mixing of custom protocols and standard protocols in a single url field (order of precedence determines priority):
#VRML V2.0 utf8
Script {
  url [ "javascript: ...",           # custom protocol JavaScript
        "http://bar.com/foo.js",     # standard protocol JavaScript
        "http://bar.com/foo.class" ] # standard protocol Java bytecode
}
The file extension for VRML files is .wrl (for world).
The official MIME type for VRML files is defined as:

model/vrml

where the MIME major type for 3D data descriptions is model, and the minor type for VRML documents is vrml.
For historical reasons (VRML 1.0) the following MIME type must also be supported:

x-world/x-vrml

where the MIME major type is x-world, and the minor type for VRML documents is x-vrml.
IETF work-in-progress on this subject can be found in "The Model Primary Content Type for Multipurpose Internet Mail Extensions", (ftp://ds.internic.net/internet-drafts/draft-nelson-model-mail-ext-01.txt).
URNs are location-independent pointers to a file, or to different representations of the same content. In most ways they can be used like URLs, except that when fetched a smart browser should fetch them from the closest source. While URN resolution over the net has not yet been standardized, they may be used now as persistent unique identifiers for files, prototypes, textures, etc. For more information on the standardization effort see http://services.bunyip.com:8000/research/ietf/urn-ietf/. VRML 2.0 browsers are not required to support URNs; however, they are required to ignore them if they do not support them.
URNs may be assigned by anyone with a domain name. For example, if the company Foo owns foo.com, then it may allocate URNs that begin with "urn:inet:foo.com:", such as "urn:inet:foo.com:texture/wood01". No special semantics are required of the string following the prefix, except that it should be lower case and characters should be URL-encoded as specified in RFC 1738.
To reference a texture, prototype, or other file by URN, include the URN in the url field of another node, for example:
ImageTexture {
  url [ "http://www.foo.com/textures/woodblock_floor.gif",
        "urn:inet:foo.com:textures/wood001" ]
}
specifies a URL as the first choice and a URN as the second choice. Note that until URN resolution is widely deployed, it is advisable to include a URL alternative whenever a URN is used. See http://earth.path.net/mitra/papers/vrml-urn.html for more details and recommendations.
At the highest level of abstraction, VRML is simply a file format for describing objects. Theoretically, the objects can contain anything -- 3D geometry, MIDI data, JPEG images, and so on. VRML defines a set of objects useful for doing 3D graphics, multi-media, and interactive object/world building. These objects are called nodes, and contain elemental data which is stored in fields and events.
A node has the following characteristics:

- a type name (e.g. Box, Color, Group, Sphere, or SpotLight),
- zero or more fields that distinguish the node from other nodes of the same type,
- a set of events that it can receive and send,
- an implementation (supplied by the browser, or by a prototype), and
- an optional name, established with the DEF keyword.

The exposedField keyword declares a field together with its implicit set_ eventIn and _changed eventOut. The declaration:

exposedField foo
is equivalent to the declaration:
eventIn  set_foo
field    foo
eventOut foo_changed
where set_foo, if written to, automatically sets the value of the field foo and generates a foo_changed eventOut.
The file syntax for representing nodes is as follows:
nodetype { fields }
Only the node type and braces are required; nodes may or may not have field values specified. Unspecified field values are set to the default values in the specification.
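For instance, both of the following fragments are legal; in the first, the Sphere's radius field takes its default value of 1 (a minimal illustration):

Sphere { }            # radius takes the default value 1
Sphere { radius 2.5 } # radius specified explicitly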
This section describes the general scene graph hierarchy, how to reuse nodes within a file, coordinate systems and transformations in VRML files, and the general model for viewing and interaction within a VRML world.
Grouping nodes are used to create hierarchical transformation graphs. Grouping nodes have a children field that contains a list of nodes which are the transformation descendants of the group. Each grouping node defines a coordinate space for its children. This coordinate space is relative to the parent node's coordinate space; that is, transformations accumulate down the scene graph hierarchy. Children nodes are restricted to legal children node types (see "Grouping and Children Nodes").
All grouping nodes also have addChildren and removeChildren eventIn definitions. The addChildren event adds the node(s) passed in to the grouping node's children field. Any nodes passed to the addChildren event that are already in the group's children list are ignored. The removeChildren event removes the node(s) passed in from the grouping node's children field. Any nodes passed in the removeChildren event that are not in the grouping node's children list are ignored.
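The following sketch adds script-generated nodes to a group at run time; the node names and the elided script body are illustrative only:

DEF HOLDER Group { }
DEF MAKER Script {
  eventOut MFNode newChildren
  url "javascript: ..."  # a script that constructs nodes, e.g. via createVrmlFromString
}
ROUTE MAKER.newChildren TO HOLDER.addChildren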
The following nodes are grouping nodes:
Anchor, Billboard, Collision, Group, Transform
A node may be referenced in a VRML file multiple times. This is called instancing (using the same instance of a node multiple times; called "sharing", "aliasing" or "multiple references" by other systems) and is accomplished by using the DEF and USE keywords.
The DEF keyword defines a node's name and creates a node of that type. The USE keyword indicates that a reference to a previously named node should be inserted into the scene graph. This has the effect of sharing a single node in more than one location in the scene. If the node is modified, then all references to that node are modified. DEF/USE name scope is limited to a single file. If multiple nodes are given the same name, then the last DEF encountered during parsing is used for USE definitions.
Tools that create VRML files may need to modify user-defined node names to ensure that a multiply instanced node with the same name as some other node will be read correctly. The recommended way of doing this is to append an underscore followed by an integer to the user-defined name. Such tools should automatically remove these automatically generated suffixes when VRML files are read back into the tool (leaving only the user-defined names).
Similarly, if an un-named node is multiply instanced, tools will have to automatically generate a name to correctly write the VRML file. The recommended form for such names is just an underscore followed by an integer.
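For example, the following illustrative fragment shares a single Appearance node between two shapes; a later change to the node named APP affects both:

Shape {
  appearance DEF APP Appearance { material Material { diffuseColor 1 0 0 } }
  geometry Sphere { }
}
Shape {
  appearance USE APP  # second reference to the same Appearance instance
  geometry Cone { }
}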
VRML provides no capability to define units of measure. All linear distances are assumed to be in meters and all angles are in radians. Time units are specified in seconds. Colors are specified in the RGB (Red-Green-Blue) color space and are restricted to the 0.0 to 1.0 range.
VRML uses a Cartesian, right-handed, 3-dimensional coordinate system. By default, objects are projected onto a 2-dimensional display device by projecting them in the direction of the positive Z-axis, with the positive X-axis to the right and the positive Y-axis up. A modeling transformation (Transform and Billboard) or viewing transformation (Viewpoint) can be used to alter this default projection.
Scenes may contain an arbitrary number of local (or object-space) coordinate systems, defined by the transformation fields of the Transform and Billboard nodes.
Conceptually, VRML also has a world coordinate system. The various local coordinate transformations map objects into the world coordinate system, which is where the scene is assembled. Transformations accumulate downward through the scene graph hierarchy, with each Transform and Billboard inheriting transformations of their parents. (Note however, that this series of transformations takes effect from the leaf nodes up through the hierarchy. The local transformations closest to the Shape object take effect first, followed in turn by each successive transformation upward in the hierarchy.)
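For example, in the following fragment the sphere's origin is first translated by the inner Transform and then by the outer one, placing its center at (3, 2, 0) in world coordinates:

Transform {
  translation 3 0 0    # applied second (outer)
  children Transform {
    translation 0 2 0  # applied first (closest to the shape)
    children Shape { geometry Sphere { } }
  }
}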
This specification assumes that there is a real person viewing and interacting with the VRML world. The VRML author may place any number of viewpoints in the world -- interesting places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoints exist in a specific coordinate system, and both the viewpoint and the coordinate system may be animated. Only one Viewpoint may be active at a time. See the description of "Bindable Children Nodes" for details. When a viewpoint is activated, the browser parents its view (or camera) into the scene graph under the currently active viewpoint. Any changes to the coordinate system of the viewpoint therefore affect the browser's view. Thus, if a user teleports to a viewpoint that is moving (one of its parent coordinate systems is being animated), then the user should move along with that viewpoint. It is intended, but not required, that browsers support a user interface by which users may "teleport" themselves from one viewpoint to another.
Several of the nodes in this specification include a bounding box field. This is typically used by grouping nodes to provide a hint to the browser on the group's approximate size for culling optimizations. The default size for bounding boxes (-1, -1, -1) implies that the user did not specify the bounding box and the browser must compute it or assume the most conservative case. A bboxSize value of (0, 0, 0) is valid and represents a point in space (i.e. an infinitely small box). Note that the bounding box of a group may change as a result of changing children. The bboxSize field values must be >= 0.0; otherwise, results are undefined. The bboxCenter fields specify a translation offset from the local coordinate system and may be in the range -infinity to +infinity.
The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside a grouping node (e.g. Transform). These are used as hints to optimize certain operations such as determining whether or not the group needs to be drawn. If the specified bounding box is smaller than the true bounding box of the group, results are undefined. The bounding box should be large enough to completely contain the effects of all sounds, lights and fog nodes that are children of this group. If the size of this group may change over time due to animating children, then the bounding box must also be large enough to contain all possible animations (movements). The bounding box should typically be the union of the group's children bounding boxes; it should not include any transformations performed by the group itself (i.e. the bounding box is defined in the local coordinate system of the group).
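For example, a group containing a single default (radius 1) sphere centered at the origin might supply the following hint; the values here are illustrative:

Transform {
  bboxCenter 0 0 0
  bboxSize   2 2 2  # just large enough to contain the radius-1 sphere
  children Shape { geometry Sphere { } }
}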
Most nodes have at least one eventIn definition and thus can receive events. Incoming events are data messages sent by other nodes to change some state within the receiving node. Some nodes also have eventOut definitions. These are used to send data messages to destination nodes that some state has changed within the source node.
If an eventOut is read before it has sent any events (e.g. get_foo_changed), the initial value as specified in "Field and Event Reference" for each field/event type is returned.
The connection between the node generating the event and the node receiving the event is called a route. A node that produces events of given type can be routed to a node that receives events of the same type using the following syntax:
ROUTE NodeName.eventOutName_changed TO NodeName.set_eventInName
The prefix set_ and the suffix _changed are recommended conventions, not strict rules. Thus, when creating prototypes or scripts, the names of the eventIns and the eventOuts may be any legal identifier name. Note, however, that an exposedField named xxx implicitly defines set_xxx as an eventIn, xxx_changed as an eventOut, and xxx as a field. It is strongly recommended that developers follow these guidelines when creating new types. There are three exceptions in the VRML specification to this recommendation: boolean events, time events, and children events. All SF/MFBool eventIns and eventOuts are named isFoo (e.g. isActive). All SF/MFTime eventIns and eventOuts are named fooTime (e.g. enterTime). The eventIns on groups for adding and removing children are named addChildren and removeChildren. These exceptions were made to improve readability.
Routes are not nodes; ROUTE is merely a syntactic construct for establishing event paths between nodes. ROUTE statements may appear at either the top-level of a .wrl file or prototype implementation, or may appear inside a node wherever fields may appear.
The types of the eventIn and the eventOut must match exactly. For example, it is illegal to route from an SFFloat to an SFInt32 or from an SFFloat to an MFFloat.
Routes may be established only from eventOuts to eventIns. Since an exposedField implicitly defines a field, an eventIn, and an eventOut, it is legal to use the exposedField's defined name when routing to and from it (rather than specifying the set_ prefix and _changed suffix). For example, the following TouchSensor's enabled exposedField is routed to the DirectionalLight's on exposedField. Note that each of the four routing examples below is legal syntax:
DEF CLICKER TouchSensor { enabled TRUE }
DEF LIGHT DirectionalLight { on FALSE }

ROUTE CLICKER.enabled TO LIGHT.on
or
ROUTE CLICKER.enabled_changed TO LIGHT.on
or
ROUTE CLICKER.enabled TO LIGHT.set_on
or
ROUTE CLICKER.enabled_changed TO LIGHT.set_on
Redundant routing is ignored. If a file repeats a routing path, the second (and all subsequent identical routes) are ignored. Likewise for dynamically created routes via a scripting language supported by the browser.
Sensor nodes generate events. Geometric sensor nodes (ProximitySensor, VisibilitySensor, TouchSensor, CylinderSensor, PlaneSensor, SphereSensor and the Collision group) generate events based on user actions, such as a mouse click or navigating close to a particular object. TimeSensor nodes generate events as time passes. See "Sensor Nodes" for more details on the specifics of sensor nodes.
Each type of sensor defines when an event is generated. The state of the scene graph after several sensors have generated events must be as if each event is processed separately, in order. If sensors generate events at the same time, the state of the scene graph will be undefined if the results depend on the ordering of the events (world creators must be careful to avoid such situations).
It is possible to create dependencies between various types of sensors. For example, a TouchSensor may result in a change to a VisibilitySensor's transformation, which may cause its visibility status to change. World authors must be careful to avoid creating indeterministic or paradoxical situations (such as a TouchSensor that is active if a VisibilitySensor is visible, and a VisibilitySensor that is NOT visible if a TouchSensor is active).
Once a Sensor or Script has generated an initial event, the event is propagated along any ROUTES to other nodes. These other nodes may respond by generating additional events, and so on. This process is called an event cascade. All events generated during a given event cascade are given the same timestamp as the initial event (they are all considered to happen instantaneously).
Some sensors generate multiple events simultaneously; in these cases, each event generated initiates a different event cascade.
Event cascades may contain loops, where an event 'E' is routed to a node that generated an event that eventually resulted in 'E' being generated. Loops are broken as follows: implementations must not generate two events from the same eventOut that have identical timestamps. Note that this rule also breaks loops created by setting up cyclic dependencies between different Sensor nodes.
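For example, the following two routes form a cycle; the rule above ensures that a single change to either translation propagates around the loop at most once per timestamp (an illustrative fragment):

DEF T1 Transform { }
DEF T2 Transform { }
ROUTE T1.translation_changed TO T2.set_translation
ROUTE T2.translation_changed TO T1.set_translation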
Fan-in occurs when two or more routes write to the same eventIn. If two events with different values but the same timestamp are received at an eventIn, then the results are undefined. World creators must be careful to avoid such situations.
Fan-out occurs when one eventOut routes to two or more eventIns. This case is perfectly legal and results in multiple events sent with the same values and the same timestamp.
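For example, the following illustrative fragment fans one eventOut out to two eventIns; both lights receive identical events with identical timestamps:

DEF CLICKER TouchSensor { }
DEF LIGHT1 DirectionalLight { }
DEF LIGHT2 DirectionalLight { }
ROUTE CLICKER.isActive TO LIGHT1.set_on
ROUTE CLICKER.isActive TO LIGHT2.set_on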
The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will roughly correspond to "real" time. A world's creator must make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will be greater than any previous time event.
Time (0.0) starts at 00:00:00 GMT January 1, 1970.
Events that are "in the past" cannot be generated; processing an event with timestamp 't' may only result in generating events with timestamps greater than or equal to `t'.
VRML does not distinguish between discrete events (like those generated by a TouchSensor) and events that are the result of sampling a conceptually continuous set of changes (like the fraction events generated by a TimeSensor). An ideal VRML implementation would generate an infinite number of samples for continuous changes, each of which would be processed infinitely quickly.
Before processing a discrete event, all continuous changes that are occurring at the discrete event's timestamp should behave as if they generate events at that same timestamp.
Beyond the requirements that continuous changes be up-to-date during the processing of discrete changes, implementations are free to otherwise sample continuous changes as often or as infrequently as they choose. Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per "frame," where a "frame" is a single rendering of the world or one time-step in a simulation.
Prototyping is a mechanism that allows the set of node types to be extended from within a VRML file. It allows the encapsulation and parameterization of geometry, attributes, behaviors, or some combination thereof.
A prototype definition consists of the following:

- the PROTO keyword,
- the name of the new node type,
- the prototype declaration, which lists the eventIns and eventOuts that the new node type can receive and send, and the exposedFields and fields, with default values, that it provides, and
- the prototype definition, which contains one or more nodes, and zero or more ROUTE statements and prototypes.

Square brackets enclose the list of events and fields, and braces enclose the definition itself:

PROTO prototypename [ eventIn      eventtypename name
                      eventOut     eventtypename name
                      exposedField fieldtypename name defaultValue
                      field        fieldtypename name defaultValue
                      ... ]
{
  Zero or more routes and prototypes
  First node (defines the node type of this prototype)
  Zero or more nodes (of any type), routes, and prototypes
}
The names of the fields, exposedFields, eventIns, and eventOuts must be unique for a single prototype (or built-in node). Therefore, the following prototype is illegal:
PROTO badNames [ field        SFBool   foo
                 eventOut     SFColor  foo
                 eventIn      SFVec3f  foo
                 exposedField SFString foo ] {...}
because the name foo is overloaded. Prototype and built-in node field and event name spaces do not overlap. Therefore, it is legal to use the same names in different prototypes, as follows:
PROTO foo [ field    SFBool  foo
            eventOut SFColor foo2
            eventIn  SFVec3f foo3 ] {...}

PROTO bar [ field    SFBool  foo
            eventOut SFColor foo2
            eventIn  SFVec3f foo3 ] {...}
A prototype statement does not define an actual instance of a node in the scene. Rather, it creates a new node type (named prototypename) that can be instantiated later in the same file as if it were a built-in node. It is thus necessary to define a node of the prototype's type to actually create an object. For example, the following file is an empty scene with a fooSphere prototype that serves no purpose:
#VRML V2.0 utf8
PROTO fooSphere [ field SFFloat fooRadius 3.0 ] {
  Sphere {
    radius 3             # default radius value for fooSphere
    radius IS fooRadius  # associates radius with fooRadius
  }
}
In the following example, a fooSphere is created and thus produces a visible result:
#VRML V2.0 utf8
PROTO fooSphere [ field SFFloat fooRadius 3.0 ] {
  Sphere {
    radius 3             # default radius value for fooSphere
    radius IS fooRadius  # associates radius with fooRadius
  }
}
fooSphere { fooRadius 42.0 }
The first node found in the prototype definition is used to define the node type of this prototype. This first node type determines how instantiations of the prototype can be used in a VRML file. An instantiation is created by filling in the parameters of the prototype declaration and inserting the first node (and its scene graph) wherever the prototype instantiation occurs. For example, if the first node in the prototype definition is a Material node, then instantiations of the prototype can be used wherever a Material can be used. Any other nodes and accompanying scene graphs are not rendered, but may be referenced via routes or scripts (and thus cannot be ignored). The following example defines a RampMaterial prototype, which continuously animates a Material's diffuseColor and must be used wherever a Material can be used in the file (i.e. within an Appearance node):
#VRML V2.0 utf8
PROTO RampMaterial [ field MFColor colors 0 0 0
                     field SFTime  cycle  1 ] {
  DEF M Material {}
  DEF C ColorInterpolator { keyValue IS colors key ... }
  DEF T TimeSensor { enabled TRUE loop TRUE cycleInterval IS cycle }
  ROUTE T.fraction_changed TO C.set_fraction
  ROUTE C.value_changed TO M.diffuseColor
}

Transform {
  children Shape {
    geometry Sphere {}
    appearance Appearance {
      material RampMaterial {
        colors [ 1 0 0, 0 0 1, 1 0 0 ] # red to blue to red
        cycle  3.0                     # 3 second cycle
      }
    }
  }
}
The next example defines a SphereCone (fused Sphere and Cone) and illustrates how the first node in the prototype definition may contain a complex scene graph:

#VRML V2.0 utf8
PROTO SphereCone [ field SFFloat radius    2.0
                   field SFFloat height    5.0
                   field SFNode  sphereApp NULL
                   field SFNode  coneApp   NULL ] {
  Transform {
    children [
      Shape { appearance IS sphereApp geometry Sphere { radius IS radius } }
      Shape { appearance IS coneApp   geometry Cone   { height IS height } }
    ]
  }
}

Transform {
  translation 15 0 0
  children SphereCone {
    radius    5.0
    height    20.0
    sphereApp Appearance { material Material { ... } }
    coneApp   Appearance { texture ImageTexture { ... } }
  }
}

Transform {
  translation -10 0 0
  children SphereCone { # default proto's radius and height
    sphereApp Appearance { texture ImageTexture { ... } }
    coneApp   Appearance { material Material { ... } }
  }
}
PROTO and EXTERNPROTO statements may appear anywhere ROUTE statements may appear: at the top-level of a file or a prototype definition, or wherever fields may appear.
The eventIn and eventOut prototype declarations receive and send events to and from the prototype's definition. Each eventIn in the prototype declaration is associated with an eventIn or exposedField defined in the prototype's node definition via the IS syntax. The eventIn declarations define the events that the prototype can receive. Each eventOut in the prototype declaration is associated with an eventOut or exposedField defined in the prototype's node definition via the IS syntax. The eventOut declarations define the events that the prototype can send. For example, the following statement exposes a Transform node's set_translation event by giving it a new name (set_position) in the prototype interface:
PROTO FooTransform [ eventIn SFVec3f set_position ] {
  Transform { set_translation IS set_position }
}
Fields (exposedField and field) specify the initial state of nodes. Defining fields in a prototype's declaration allows the initial state of associated fields in the prototype definition to be specified when an instance of the prototype is created. The fields of the prototype are associated with fields in the node definition using the IS keyword. Field default values must be specified in the prototype declaration. For example:
PROTO BarTransform [ exposedField SFVec3f position 42 42 42 ] {
  Transform {
    translation IS position
    translation 100 100 100
  }
}
defines a prototype, BarTransform, that specifies the initial value (42, 42, 42) of the position exposedField. The position field is associated with the translation field of the Transform node in the prototype definition using the IS syntax. Note that the field values in the prototype definition for translation (100, 100, 100) are legal, but overridden by the prototype declaration defaults.
Note that in some cases, it is necessary to specify the field defaults inside the prototype definition. For example, the following prototype associates the prototype definition's Material node diffuseColor (exposedField) to the prototype declaration's eventIn myColor and also defines the default diffuseColor values:
PROTO foo [ eventIn SFColor myColor ] {
Material {
diffuseColor 1 0 0
diffuseColor IS myColor # or set_diffuseColor IS myColor
}
}
IS statements may appear inside the prototype definition wherever fields may appear. IS statements must refer to fields or events defined in the prototype declaration; it is an error for an IS statement to refer to a non-existent declaration. It is an error if the type of the field or event being associated does not match the type declared in the prototype's interface declaration. For example, it is illegal to associate an SFColor with an SFVec3f, and it is likewise illegal to associate an SFColor with an MFColor, or vice versa. The following table defines the rules for mapping between the prototype declarations and the primary scene graph's nodes (yes denotes a legal mapping, no denotes an error):
                   Prototype declaration
Node           exposedField | field | eventIn | eventOut
exposedField   yes          | yes   | yes     | yes
field          no           | yes   | no      | no
eventIn        no           | no    | yes     | no
eventOut       no           | no    | no      | yes

(Rows name the item in the node definition; columns name the item in the prototype declaration.)
Specifying the field and event types both in the prototype declaration and in the node definition is intended to prevent user errors and to provide consistency with "External Prototypes".
A prototype is instantiated as if prototypename were a built-in node. The prototype name must be unique within the scope of the file, and may not be the same as the name of a built-in node type or of a previously defined prototype.
Prototype instances may be named using DEF and may be multiply instanced using USE as any built-in node. A prototype instance can be used in the scene graph wherever the first node of the primary scene graph can be used. For example, a prototype defined as:
PROTO MyObject [ ... ] {
  Box { ... }
  ROUTE ...
  Script { ... }
  ...
}
may be instantiated wherever a Box may be used (e.g. Shape node's geometry field), since the first node of the prototype definition is a Box.
A prototype's scene graph defines a DEF/USE name scope separate from the rest of the scene; nodes DEF'ed inside the prototype may not be USE'ed outside of the prototype's scope, and nodes DEF'ed outside the prototype scope may not be USE'ed inside the prototype scope.
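For example, in the following illustrative sketch the USE inside the prototype is legal, while the same USE outside the prototype would be an error:

PROTO Pair [] {
  Group {
    children [
      DEF S Shape { geometry Sphere { } }
      USE S  # legal: S is DEF'ed in this prototype's scope
    ]
  }
}
# USE S at this point would be an error: S is not visible outside the prototype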
Prototype definitions appearing inside a prototype implementation (i.e. nested) are local to the enclosing prototype. For example, given the following:
PROTO one [...] {
  PROTO two [...] { ... }
  ...
  two { } # Instantiation inside "one": OK
}
two { }   # ERROR: "two" may only be instantiated inside "one".
The second instantiation of "two" is illegal. IS statements inside a nested prototype's implementation may refer to the prototype declarations of the innermost prototype. Therefore, IS statements in "two" cannot refer to declarations in "one".
A prototype may be instantiated in a file anywhere after the completion of the prototype definition. A prototype may not be instantiated inside its own implementation (i.e. recursive prototypes are illegal). The following example produces an error:
PROTO Foo [] { Foo {} }
The syntax for defining prototypes in external files is as follows:
EXTERNPROTO externprototypename [ eventIn      eventtypename name
                                  eventOut     eventtypename name
                                  field        fieldtypename name
                                  exposedField fieldtypename name
                                  ... ]
  "URL/URN" or [ "URL/URN", "URL/URN", ... ]
The external prototype is then given the name externprototypename in this file's scope. It is an error if the eventIn/eventOut declaration in the EXTERNPROTO is not a subset of the eventIn/eventOut declarations specified in the PROTO referred to by the URL. If multiple URLs or URNs are specified, the browser searches in the order of preference (see "URLs and URNs").
Unlike a prototype, an external prototype does not contain an inline implementation of the node type. Instead, the prototype implementation is fetched from a URL or URN. The other difference between a prototype and an external prototype is that external prototypes do not contain default values for fields. The external prototype references a file that contains the prototype implementation, and this file contains the field default values.
The URL/URNs refer to legal VRML files in which the first prototype found in the file is used to define the external prototype's definition. Note that the prototypename does not need to match the externprototypename. The following example illustrates how an external prototype's declaration may be a subset of the prototype's declaration (diff vs. diffuse and shiny) and how the external prototype's type name may differ from the prototype's type name (e.g. FooBar != SimpleMaterial):
foo.wrl:
--------
#VRML V2.0 utf8
EXTERNPROTO FooBar [ eventIn SFColor diff ]
  "http://foo.com/coolNode.wrl"
...

http://foo.com/coolNode.wrl:
----------------------------
#VRML V2.0 utf8
PROTO SimpleMaterial [ exposedField SFColor diffuse 1 0 0
                       eventIn      SFFloat shiny ] {
  Material { ... }
}
To allow the creation of libraries of small, reusable PROTO definitions, browsers shall recognize EXTERNPROTO URLs that end with "#name" to mean the prototype definition of "name" in the given file. For example, a library of standard materials might be stored in a file called "materials.wrl" that looks like:
#VRML V2.0 utf8
PROTO Gold   [] { Material { ... } }
PROTO Silver [] { Material { ... } }
...etc.
A material from this library could be used as follows:
#VRML V2.0 utf8
EXTERNPROTO Gold [] "http://.../materials.wrl#Gold"
...
Shape {
  appearance Appearance { material Gold {} }
  geometry ...
}
The advantage is that only one http fetch needs to be done if several things are used from the library; the disadvantage is that the entire library will be transmitted across the network even if only one prototype is used in the file.
Decision logic and state management is often needed to decide what effect an event should have on the scene -- "if the vault is currently closed AND the correct combination is entered, then open the vault." These kinds of decisions are expressed as Script nodes (see "Nodes Reference - Script") that receive events from other nodes, process them, and send events to other nodes. A Script node can also keep track of information between executions (i.e. managing internal state over time). This section describes the general mechanisms and semantics that all scripting languages must support. See the specific scripting language appendix for the syntax and details of each language (see "Appendix C. Java Reference" and "Appendix D. JavaScript Reference").
Event processing is done by a program or script contained in (or referenced by) the Script node's url field. This program or script can be written in any programming language that the browser supports. Browsers are not required to implement any specific scripting languages in VRML 2.0.
A Script node is activated when it receives an event. At that point the browser executes the program in the Script node's url field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions: sending out events (and thereby changing the scene), performing calculations, communicating with servers elsewhere on the Internet, and so on. See "Execution Model" for a detailed description of the ordering of event processing.
Script nodes allow the world author to insert logic into the middle of an event cascade. Scripts also allow the world author to generate an event cascade when a Script node is created or, in some scripting languages, at arbitrary times.
Script nodes receive events in timestamp order. Any events generated as a result of processing an event are given timestamps corresponding to the event that generated them. Conceptually, it takes no time for a Script node to receive and process an event, even though in practice it does take some amount of time to execute a Script.
The scripting language binding may define an initialize method (or constructor). This method is called before any events are generated. Events generated by the initialize method must have timestamps less than any other events that are generated by the Script node.
Likewise, the scripting language binding may define a shutdown method (or destructor). This method is called when the corresponding Script node is deleted or the world containing the Script node is unloaded or replaced by another world. This can be used as a clean up operation, such as informing external mechanisms to remove temporary files.
The scripting language binding may also define an eventsProcessed routine that is called after one or more events are received. It allows Scripts that do not rely on the order of events received to generate fewer events than an equivalent Script that generates events whenever events are received. If it is used in some other way, eventsProcessed can be non-deterministic, since different implementations may call eventsProcessed at different times.
For a single event cascade, a given Script node's eventsProcessed routine must be called at most once. Events generated from an eventsProcessed routine are given the timestamp of the last event processed.
Scripts that have access to other nodes (via SFNode or MFNode fields or eventIns) and that have their directOutput field set to TRUE may directly post eventIns to those nodes. They may also read the last value sent from any of the node's eventOuts.
When setting a value in another node, implementations are free to either immediately set the value or to defer setting the value until the Script is finished. When getting a value from another node, the value returned must be up-to-date; that is, it must be the value immediately before the time of the current timestamp (the current timestamp is the timestamp of the event that caused the Script node to execute).
The order of execution of Script nodes that do not have ROUTEs between them is undefined. If multiple Scripts with directOutput TRUE all read and/or write the same node, the results may be undefined. Just as with ROUTE fan-in, these cases are inherently non-deterministic, and it is up to the world creator to ensure that they do not happen.
Some languages supported by VRML browsers may allow Script nodes to spontaneously generate events, allowing users to create Script nodes that function like new Sensor nodes. In these cases, the Script generates the initial event that causes the event cascade, and the scripting language and/or the browser will determine an appropriate timestamp for that initial event. Such events are then sorted into the event stream and processed like any other event, following all of the same rules for loop breaking and so on.
The Script node's url field may specify a URL which refers to a file (e.g. http:) or directly inlines (e.g. javabc:) scripting language code. The MIME type of the returned data defines the language type. Additionally, instructions can be included inline using either the data: protocol (which allows a MIME type specification) or a "Scripting Language Protocol" defined for the specific language (in which case the language type is inferred).
Events received by the Script node are passed to the appropriate scripting language function in the script. The function's name depends on the language type used; in some cases it is identical to the name of the eventIn, while in others it is a general callback function for all eventIns (see the language appendices for details). The function is passed two arguments, the event value and the event timestamp.
For example, the following Script node has one eventIn named start and three different URL values specified in the url field: Java, JavaScript, and inline JavaScript:
Script {
  eventIn SFBool start
  url [ "http://foo.com/fooBar.class",
        "http://foo.com/fooBar.js",
        "javascript:function start(value, timestamp) { ... }" ]
}
In the above example, when a start eventIn is received by the Script node, one of the scripts found in the url field is executed. The Java code is the first choice, the JavaScript code is the second choice, and the inline JavaScript code is the third choice. See "URLs and URNs" for a description of the order of preference for multiple-valued URL fields.
The fields, eventIns and eventOuts of a Script node are accessible from scripting language functions. The Script's eventIns can be routed to and its eventOuts can be routed from. Another Script node with a pointer to this node can access its eventIns and eventOuts just like any other node.
Fields defined in the Script node are available to the script through a language-specific mechanism (e.g. a member variable is automatically defined for each field and event of the Script node). The field values can be read or written and are persistent across function calls. EventOuts defined in the Script node can also be read; the value is the last value sent.
The script can access any exposedField, eventIn or eventOut of any node to which it has a pointer. The syntax of this mechanism is language dependent. The following example illustrates how a Script node accesses and modifies an exposed field of another node (i.e. sends a set_translation eventIn to the Transform node) using a fictitious scripting language:
DEF SomeNode Transform { }
Script {
  field SFNode tnode USE SomeNode
  eventIn SFVec3f pos
  directOutput TRUE
  url "... function pos(value, timestamp) { tnode.set_translation = value; }"
}
Each scripting language provides a mechanism for allowing scripts to send a value through an eventOut defined by the Script node. For example, one scripting language may define an explicit function for sending each eventOut, while another language may use assignment statements to automatically defined eventOut variables to implicitly send the eventOut. The results of sending multiple values through an eventOut during a single script execution are undefined; it may result in multiple eventOuts with the same timestamp, or a single eventOut with the last assigned value.
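For example, in the JavaScript binding (where assigning to a variable named after an eventOut sends that event), a Script might convert a boolean input into a color output; this is a sketch, and the names are illustrative:

Script {
  eventIn  SFBool  isActive
  eventOut SFColor colorOut
  url "javascript:
    function isActive(value, timestamp) {
      if (value) colorOut = new SFColor(1, 0, 0);  // assignment sends the eventOut
      else       colorOut = new SFColor(0, 0, 1);
    }"
}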
The browser interface provides a mechanism for scripts contained by Script nodes to get and set browser state, such as the URL of the current world. This section describes the semantics of the functions/methods that the browser interface supports. A C-like syntax is used to define the types of parameters and returned values, but this syntax is hypothetical. See the appropriate language appendix for the actual syntax required. In this hypothetical syntax, types are given as VRML field types. Mapping of these types into those of the underlying language (as well as any type conversion needed) is described in the appropriate language reference.
The getName() and getVersion() methods get the "name" and "version" of the browser currently in use. These values are defined by the browser writer, and identify the browser in some (unspecified) way. They are not guaranteed to be unique or to adhere to any particular format, and are for information only. If the information is unavailable these methods return empty strings.
The getCurrentSpeed() method returns the speed at which the viewpoint is currently moving, in meters per second. If speed of motion is not meaningful in the current navigation type, or if the speed cannot be determined for some other reason, 0.0 is returned.
The getCurrentFrameRate() method returns the current frame rate in frames per second. The way in which this is measured and whether or not it is supported at all is browser dependent. If frame rate is not supported, or can't be determined, 0.0 is returned.
The getWorldURL() method returns the URL for the root of the currently loaded world.
The replaceWorld() method replaces the current world with the world represented by the passed nodes. This will usually not return, since the world containing the running script is being replaced.
The loadURL() method loads url with the passed parameters. The parameter argument is as described in the Anchor node. This method returns immediately, but if the URL is loaded into this browser window (e.g. there is no TARGET parameter to redirect it to another frame), the current world will be terminated and replaced with the data from the new URL at some time in the future.
The setDescription method sets the passed string as the current description. This message is displayed in a browser dependent manner. To clear the current description, send an empty string.
The createVrmlFromString() method takes a string consisting of a VRML scene description, parses the nodes contained therein and returns the root nodes of the corresponding VRML scene.
The createVrmlFromURL() method instructs the browser to load a VRML scene description from the given URL or URLs. After the scene is loaded, an event is sent to the passed node returning the root nodes of the corresponding VRML scene. The event parameter contains a string naming an MFNode eventIn on the passed node.
These methods respectively add and delete a route between the given event names for the given nodes.
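In the hypothetical C-like syntax used above, the browser interface described in this section might be summarized as follows; parameter names are illustrative, and the exact form for each language is defined in its appendix:

SFString getName();
SFString getVersion();
SFFloat  getCurrentSpeed();
SFFloat  getCurrentFrameRate();
SFString getWorldURL();
void     replaceWorld(MFNode nodes);
void     loadURL(MFString url, MFString parameter);
void     setDescription(SFString description);
MFNode   createVrmlFromString(SFString vrmlSyntax);
void     createVrmlFromURL(MFString url, SFNode node, SFString event);
void     addRoute(SFNode fromNode, SFString fromEventOut, SFNode toNode, SFString toEventIn);
void     deleteRoute(SFNode fromNode, SFString fromEventOut, SFNode toNode, SFString toEventIn);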
Browsers that wish to add functionality beyond the capabilities in the specification should do so by creating prototypes or external prototypes. If the new node cannot be expressed using the prototyping mechanism (i.e. it cannot be expressed as VRML scene graph), then it should be defined as an external prototype with a unique URN specification. Authors who use the extended functionality may provide multiple, alternative URLs or URNs to represent the content to ensure that it is viewable on all browsers.
For example, suppose a browser wants to create a native Torus geometry node implementation:
EXTERNPROTO Torus [ field SFFloat bigR, field SFFloat smallR ]
  ["urn:inet:library:Torus", "http://.../proto_torus.wrl" ]
This browser will recognize the URN and use its own private implementation of the Torus node. Other browsers may not recognize the URN, and will skip to the next entry in the URL list and search for the specified prototype file. If no URLs or URNs are found, the Torus is assumed to be an empty node.
Note that the prototype name, "Torus", in the above example has no meaning whatsoever. The URN/URL uniquely and precisely defines the name/location of the node implementation. The prototype name is strictly a convention chosen by the author and shall not be interpreted in any semantic manner. The following example uses both "Ring" and "Donut" to name the torus node, but that the URN/URL, "urn:library:Torus, http://.../proto_torus.wrl", specify the actual definition of the Torus node:
#VRML V2.0 utf8
EXTERNPROTO Ring [field SFFloat bigR, field SFFloat smallR ]
["urn:library:Torus", "http://.../proto_torus.wrl" ]
EXTERNPROTO Donut [field SFFloat bigR, field SFFloat smallR ]
["urn:library:Torus", "http://.../proto_torus.wrl" ]
Transform { ... children Shape { geometry Ring } }
Transform { ... children Shape { geometry Donut } }
VRML-compliant browsers must recognize and implement the PROTO, EXTERNPROTO, and URN specifications. Note that the prototype names (e.g. Torus) have no semantic meaning whatsoever. Rather, the URL and the URN uniquely determine the location and semantics of the node. Browsers shall not use the PROTO or EXTERNPROTO name to imply anything about the implementation of the node.
The Background, Fog, NavigationInfo, and Viewpoint nodes have the unique behavior that only one of each type can be active (i.e. affecting the user's experience) at any point in time. See "Grouping and Children Nodes" for a description of legal children nodes. The browser shall maintain a stack for each type of binding node. Each of these nodes includes a set_bind eventIn and an isBound eventOut. The set_bind eventIn is used to move a given node to and from the top of its respective stack: a TRUE value sent to set_bind moves the node to the top of the stack, and a FALSE value removes it from the stack. The isBound event is output when a given node is moved to the top of the stack, is removed from the stack, or is pushed down in the stack by another node being placed on top; that is, isBound is sent both when a given node becomes the active node and when it ceases to be the active node. The node at the top of the stack (the most recently bound node) is the active node for its type and is used by the browser to set world state. If the stack is empty (i.e. either the file has no binding nodes for a given type or the stack has been popped until empty), then the default field values for that node type are used to set world state. The results are undefined if a multiply instanced (DEF/USE) bindable node is bound.
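For example, the following illustrative fragment pushes a close-up Viewpoint onto the stack while the pointing device is pressed over the box; releasing it sends FALSE to set_bind, popping the stack and restoring the previous view:

DEF ENTRY   Viewpoint { description "Entry view" }
DEF CLOSEUP Viewpoint { position 0 0 2 description "Close-up" }
Transform {
  children [
    DEF TOUCH TouchSensor { }
    Shape { geometry Box { } }
  ]
}
ROUTE TOUCH.isActive TO CLOSEUP.set_bind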
Geometry nodes must be contained by a Shape node in order to be visible to the user. The Shape node contains exactly one geometry node in its geometry field. This node must be one of the following node types:
Box, Cone, Cylinder, ElevationGrid, Extrusion, IndexedFaceSet, IndexedLineSet, PointSet, Sphere, Text
Several geometry nodes also contain Coordinate, Color, Normal, and TextureCoordinate as geometric property nodes. These property nodes are separated out as individual nodes so that instancing and sharing is possible between different geometry nodes. All geometry nodes are specified in a local coordinate system and are affected by parent transformations.
The ccw field indicates whether the vertices are ordered in a counter-clockwise direction when the shape is viewed from the outside (TRUE). If the order is clockwise when the shape is viewed from the outside, this field value is FALSE. The solid field indicates whether the shape encloses a volume (TRUE), and can be used as a hint to perform backface culling. If nothing is known about the shape, this field value is FALSE (implying that backface culling cannot be performed and that the polygons are two-sided). If solid is TRUE, the ccw field has no effect. The convex field indicates whether all faces in the shape are convex (TRUE). If nothing is known about the faces, this field value is FALSE.
These hints allow VRML implementations to optimize certain rendering features. Optimizations that may be performed include enabling backface culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, an implementation may turn on backface culling and turn off two-sided lighting. If the object is not solid but has ordered vertices, it may turn off backface culling and turn on two-sided lighting.
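For example, a closed, consistently wound mesh with possibly concave faces might declare its hints as follows (a sketch; coordinate data elided):

Shape {
  geometry IndexedFaceSet {
    solid  TRUE   # encloses a volume: backface culling is safe
    ccw    TRUE   # vertices ordered counter-clockwise viewed from outside
    convex FALSE  # faces may be concave and need tessellation
    coord Coordinate { point [ ... ] }
    coordIndex [ ... ]
  }
}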
Interpolator nodes are designed for linear keyframed animation. That is, an interpolator node defines a piecewise linear function, f(t), on the interval (-infinity, infinity). The piecewise linear function is defined by n values of t, called key, and the n corresponding values of f(t), called keyValue. The keys must be monotonically non-decreasing and are not restricted to any interval. An interpolator node evaluates f(t) given any value of t (via the set_fraction eventIn).
Let the n keys k0, k1, k2, ..., k(n-1) partition the domain (-infinity, infinity) into the n+1 subintervals given by (-infinity, k0), [k0, k1), [k1, k2), ... , [k(n-1), infinity). Also, let the n values v0, v1, v2, ..., v(n-1) be the values of an unknown function, F(t), at the associated key values. That is, vj = F(kj). The piecewise linear interpolating function, f(t), is defined to be
f(t) = v0,                     if t < k0
     = v(n-1),                 if t > k(n-1)
     = vi,                     if t = ki for some value of i, where -1 < i < n
     = linterp(t, vj, v(j+1)), if kj < t < k(j+1)
where linterp(t, x, y) is the linear interpolant, and -1 < j < n-1. The third conditional value of f(t) allows the defining of multiple values for a single key, i.e. limits from both the left and right at a discontinuity in f(t). The first specified value is used as the limit of f(t) from the left, and the last specified value is used as the limit of f(t) from the right. The value of f(t) at a multiply defined key is indeterminate, but should be one of the associated limit values.
There are six different types of interpolator nodes, each based on the type of value that is interpolated:
ColorInterpolator, CoordinateInterpolator, NormalInterpolator, OrientationInterpolator, PositionInterpolator, ScalarInterpolator
All interpolator nodes share a common set of fields and semantics:
exposedField MFFloat      key           [...]
exposedField MF<type>     keyValue      [...]
eventIn      SFFloat      set_fraction
eventOut     [S|M]F<type> value_changed
The type of the keyValue field is dependent on the type of the interpolator (e.g. the ColorInterpolator's keyValue field is of type MFColor). Each value in the keyValue field corresponds in order to a parameterized time in the key field. Therefore, there exists exactly the same number of values in the keyValue field as key values in the key field.
The set_fraction eventIn receives a float event and causes the interpolator function to evaluate. The results of the linear interpolation are sent to value_changed eventOut.
Four of the six interpolators output a single-valued field to value_changed. The exceptions, CoordinateInterpolator and NormalInterpolator, send multiple-value results to value_changed. In this case, the keyValue field is an nxm array of values, where n is the number of keys and m is the number of values per key. It is an error if m is not a positive integer value.
The following example illustrates a simple ScalarInterpolator which contains a list of float values (11.0, 99.0, and 33.0) and the keyframe times (0.0, 5.0, and 10.0), and which outputs a single float value for any given time:

ScalarInterpolator {
  key      [ 0.0, 5.0, 10.0 ]
  keyValue [ 11.0, 99.0, 33.0 ]
}
For an input of 2.5 (via set_fraction), this ScalarInterpolator would send an output value of:
eventOut SFFloat value_changed 55.0 # = 11.0 + ((99.0-11.0)/(5.0-0.0)) * 2.5
Whereas the CoordinateInterpolator below defines an array of coordinates for each keyframe value and sends an array of coordinates as output:

CoordinateInterpolator {
  key      [ 0.0, 0.5, 1.0 ]
  keyValue [ 0 0 0,    10 10 30,  # 2 keyValues at key 0.0
             10 20 10, 40 50 50,  # 2 keyValues at key 0.5
             33 55 66, 44 55 65 ] # 2 keyValues at key 1.0
}
In this case, there are two coordinates for every keyframe. The first
two coordinates, (0, 0, 0) and (10, 10, 30), represent the value at
keyframe 0.0; the second two, (10, 20, 10) and (40, 50, 50),
represent the value at keyframe 0.5; and so on. If a set_fraction
value of 0.25 (meaning 25% of the animation) is sent to this
CoordinateInterpolator, the resulting output value is:
eventOut MFVec3f value_changed [ 5 10 5, 25 30 40 ]
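A multiple-valued interpolator can be evaluated the same way by slicing the keyValue array into m values per key and interpolating each slot and component independently. The following Python sketch (again illustrative; interpolate_coords is our own name) reuses interpolate() from the earlier sketch and reproduces the output above:

    def interpolate_coords(t, key, key_value):
        """CoordinateInterpolator-style evaluation: key_value is a flat
        list of n*m SFVec3f tuples, m per key; every coordinate slot and
        component is interpolated independently."""
        n = len(key)
        m = len(key_value) // n                  # coordinates per key
        result = []
        for slot in range(m):
            per_key = [key_value[i * m + slot] for i in range(n)]
            result.append(tuple(interpolate(t, key, [p[c] for p in per_key])
                                for c in range(3)))
        return result

    key = [0.0, 0.5, 1.0]
    key_value = [(0, 0, 0),    (10, 10, 30),    # key 0.0
                 (10, 20, 10), (40, 50, 50),    # key 0.5
                 (33, 55, 66), (44, 55, 65)]    # key 1.0
    print(interpolate_coords(0.25, key, key_value))
    # [(5.0, 10.0, 5.0), (25.0, 30.0, 40.0)]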
If an interpolator node's value_changed eventOut is read (e.g., via get_value_changed) before it receives any inputs, keyValue[0] is returned.
The location of an interpolator node in the scene graph has no effect on its operation. For example, if a parent of an interpolator node is a Switch node with whichChoice set to -1 (i.e., ignore its children), the interpolator continues to operate as specified (it receives and sends events).
In general, shape nodes are illuminated by the sum of all of the lights in the world that affect them. This includes the contribution of both the direct and ambient illumination from light sources. Ambient illumination results from the scattering and reflection of light originally emitted directly by light sources. The amount of ambient light is associated with the individual lights in the scene. This is a gross approximation to how ambient reflection actually occurs in nature.
There are three types of light source nodes:
DirectionalLight |
PointLight |
SpotLight |
All light source nodes contain an intensity, a color, and an ambientIntensity field. The intensity field specifies the brightness of the direct emission from the light, and the ambientIntensity field specifies the intensity of the ambient emission from the light. Light intensity may range from 0.0 (no light emission) to 1.0 (full intensity). The color field specifies the spectral color properties of the light emission as an RGB value in the range 0.0 to 1.0.
PointLight and SpotLight illuminate all objects in the world that fall within their volume of lighting influence, regardless of location within the file. PointLight defines this volume of influence as a sphere centered at the light (defined by a radius). SpotLight defines the volume of influence as a solid angle defined by a radius and a cutoff angle. DirectionalLights illuminate only the objects descended from the light's parent grouping node (including any descendant children of the parent group node).
A Shape node is unlit if either of the following is true: the Shape's appearance field is NULL (i.e., no Appearance node is specified), or the material field of the Appearance node is NULL (i.e., no Material node is specified).
If the shape is unlit, the color (Irgb) and alpha (A, i.e., 1 - transparency) of the shape at each point on its geometry are given by the following table:
    Unlit Geometry                          | Color per-vertex or per-face  | Color NULL
    No texture                              | Irgb = ICrgb;       A = 1     | Irgb = (1, 1, 1);     A = 1
    Intensity (one-component) texture       | Irgb = IT × ICrgb;  A = 1     | Irgb = (IT, IT, IT);  A = 1
    Intensity+Alpha (two-component) texture | Irgb = IT × ICrgb;  A = AT    | Irgb = (IT, IT, IT);  A = AT
    RGB (three-component) texture           | Irgb = ITrgb;       A = 1     | Irgb = ITrgb;         A = 1
    RGBA (four-component) texture           | Irgb = ITrgb;       A = AT    | Irgb = ITrgb;         A = AT
where:

    AT    = normalized (0-1) alpha value from a 2- or 4-component texture image
    ICrgb = interpolated per-vertex color, or per-face color, from the Color node
    IT    = normalized (0-1) intensity from a 1- or 2-component texture image
    ITrgb = color from a 3- or 4-component texture image
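The unlit table can be read as a small decision procedure. The following Python sketch (our own names, not VRML syntax; the argument names mirror the terms just defined) follows it directly:

    def unlit_color(tex_components, IT=None, ITrgb=None, AT=None, ICrgb=None):
        """Return (Irgb, A) for unlit geometry per the table above.
        tex_components is 0 (no texture) through 4; ICrgb is the
        interpolated per-vertex/per-face color, or None if Color is NULL."""
        if tex_components == 0:
            return (ICrgb if ICrgb is not None else (1.0, 1.0, 1.0)), 1.0
        if tex_components in (1, 2):             # intensity (+ alpha) texture
            base = ICrgb if ICrgb is not None else (1.0, 1.0, 1.0)
            rgb = tuple(IT * c for c in base)    # (IT, IT, IT) when Color is NULL
            return rgb, (AT if tex_components == 2 else 1.0)
        # 3- and 4-component textures override the Color node entirely
        return ITrgb, (AT if tex_components == 4 else 1.0)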
If the shape is lit (i.e., a Material and an Appearance node are specified for the Shape), the Material, Color, and Texture nodes determine the diffuse color and alpha for the lighting equation, as specified in the following table:
    Lit Geometry                            | Color per-vertex or per-face     | Color NULL
    No texture                              | Odrgb = ICrgb;       A = 1-TM    | Odrgb = IMrgb;       A = 1-TM
    Intensity texture (one-component)       | Odrgb = IT × ICrgb;  A = 1-TM    | Odrgb = IT × IMrgb;  A = 1-TM
    Intensity+Alpha texture (two-component) | Odrgb = IT × ICrgb;  A = AT      | Odrgb = IT × IMrgb;  A = AT
    RGB texture (three-component)           | Odrgb = ITrgb;       A = 1-TM    | Odrgb = ITrgb;       A = 1-TM
    RGBA texture (four-component)           | Odrgb = ITrgb;       A = AT      | Odrgb = ITrgb;       A = AT
where:

    IMrgb = material diffuseColor
    Odrgb = diffuse factor, used in the lighting equations below
    TM    = material transparency

... and all other terms are as above.
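The lit table differs from the unlit one only in that the material's diffuseColor stands in when the Color node is NULL, and material transparency determines alpha in the opaque texture cases. A parallel Python sketch (again our own naming, not VRML syntax):

    def lit_diffuse(tex_components, IMrgb, TM, IT=None, ITrgb=None,
                    AT=None, ICrgb=None):
        """Return (Odrgb, A) for lit geometry per the table above."""
        base = ICrgb if ICrgb is not None else IMrgb  # Color overrides diffuseColor
        if tex_components == 0:
            return base, 1.0 - TM
        if tex_components in (1, 2):                  # intensity (+ alpha) texture
            return (tuple(IT * c for c in base),
                    AT if tex_components == 2 else 1.0 - TM)
        # 3- and 4-component textures override both Color and diffuseColor
        return ITrgb, (AT if tex_components == 4 else 1.0 - TM)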
An ideal VRML 2.0 implementation will evaluate the following lighting equation at each point on a surface. RGB intensities at each point on a geometry (Irgb) are given by:
    Irgb = Ifrgb × (1 - s0)
         + s0 × ( Oergb + SUM( oni × attenuationi × spoti × Iiprgb
                               × ( ambienti + diffusei + speculari ) ) )

where:

    ·          = modified vector dot product: 0 if the dot product is negative, the dot product otherwise
    Ifrgb      = currently bound fog's color
    Iiprgb     = light i color
    Iia        = light i ambientIntensity
    L          = (PointLight/SpotLight) normalized vector from the point on the geometry to light source i's position
    L          = (DirectionalLight) negated direction of light source i
    N          = normalized normal vector at this point on the geometry
    Oa         = material ambientIntensity
    Odrgb      = diffuse color, from the Material node, Color node, and/or texture node
    Oergb      = material emissiveColor
    Osrgb      = material specularColor
    V          = normalized vector from the point on the geometry to the viewer's position

    attenuationi = 1 / max( c1 + c2×dL + c3×dL², 1 )
    ambienti     = Iia × Odrgb × Oa
    c1, c2, c3   = light i attenuation
    dV           = distance from the point on the geometry to the viewer's position, in world space
    dL           = distance from the light to the point on the geometry, in the light's coordinate system
    diffusei     = kd × Odrgb × ( N · L )
    kd = ks      = light i intensity
    oni          = 1 if light source i affects this point on the geometry
    oni          = 0 if light source i does not affect this geometry (farther away than radius
                   for a PointLight or SpotLight, outside of the enclosing Group/Transform
                   for a DirectionalLight, or its on field is FALSE)
    shininess    = material shininess
    speculari    = ks × Osrgb × ( N · ((L + V) / |L + V|) )^(shininess×128)

    spoti = 1                                    | spotCutoffi >= pi/2, or light i is a PointLight or DirectionalLight
    spoti = 0                                    | spotCutoffi < pi/2 and L · spotDiri < cos(spotCutoffi)
    spoti = ( L · spotDiri )^(spotExponent×128)  | spotCutoffi < pi/2 and L · spotDiri >= cos(spotCutoffi)

    spotCutoffi  = SpotLight i cutoff angle
    spotDiri     = normalized SpotLight i direction
    spotExponent = SpotLight i exponent
    SUM          = sum over all light sources i

    s0 = 1                                       | no fog
    s0 = (fogVisibility - dV) / fogVisibility    | fogType "LINEAR", dV < fogVisibility
    s0 = 0                                       | fogType "LINEAR", dV >= fogVisibility
    s0 = exp( -dV / (fogVisibility - dV) )       | fogType "EXPONENTIAL", dV < fogVisibility
    s0 = 0                                       | fogType "EXPONENTIAL", dV >= fogVisibility
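The complete equation can be exercised with a short Python sketch. This is an illustrative reading of the formulas above, not browser source; the per-light on, attenuation, and spot factors are passed in precomputed, following the definitions of oni, attenuationi, and spoti:

    from math import exp

    def dot0(a, b):
        """Modified dot product '·': clamped to 0 when negative."""
        return max(sum(x * y for x, y in zip(a, b)), 0.0)

    def attenuation(c1, c2, c3, dL):
        """attenuationi = 1 / max(c1 + c2*dL + c3*dL^2, 1)."""
        return 1.0 / max(c1 + c2 * dL + c3 * dL * dL, 1.0)

    def lighting(N, V, Odrgb, Oergb, Osrgb, Oa, shininess, lights,
                 fog=None, dV=0.0):
        """Irgb at one surface point. Each entry of lights is a dict with
        keys on, att, spot, Iiprgb, Iia, kd, ks, and L (unit vector to light);
        fog is (fogType, fogVisibility, Ifrgb) or None."""
        total = list(Oergb)                           # emissive term
        for lt in lights:
            if not lt["on"]:                          # oni = 0
                continue
            ambient = lt["Iia"] * Oa                  # scales Odrgb below
            diffuse = lt["kd"] * dot0(N, lt["L"])     # scales Odrgb below
            H = tuple(l + v for l, v in zip(lt["L"], V))
            mag = sum(h * h for h in H) ** 0.5 or 1.0
            H = tuple(h / mag for h in H)             # (L+V)/|L+V|
            spec = lt["ks"] * dot0(N, H) ** (shininess * 128)
            k = lt["att"] * lt["spot"]
            for c in range(3):
                total[c] += k * lt["Iiprgb"][c] * (
                    (ambient + diffuse) * Odrgb[c] + spec * Osrgb[c])
        if fog is None:                               # s0 = 1: no fog
            return tuple(total)
        fog_type, vis, Ifrgb = fog
        if dV >= vis:
            s0 = 0.0
        elif fog_type == "LINEAR":
            s0 = (vis - dV) / vis
        else:                                         # "EXPONENTIAL"
            s0 = exp(-dV / (vis - dV))
        return tuple(f * (1 - s0) + s0 * t for f, t in zip(Ifrgb, total))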
The VRML lighting equations are based on the simple illumination equations given in "Computer Graphics: Principles and Practice", Foley, van Dam, Feiner and Hughes, section 16.1, "Illumination and Shading" [FOLE], and in the OpenGL 1.1 specification (http://www.sgi.com/Technology/openGL/spec.html), sections 2.13 (Lighting) and 3.9 (Fog) [OPEN].
There are several different kinds of sensor nodes: ProximitySensor, TimeSensor, VisibilitySensor, and a variety of pointing device sensors (Anchor, CylinderSensor, PlaneSensor, SphereSensor, TouchSensor). Sensors are children nodes in the hierarchy and may therefore be parented by grouping nodes; see "Grouping and Children Nodes".
The ProximitySensor detects when the user navigates into a specified invisible region in the world. The TimeSensor is a clock that has no geometry or location associated with it; it is used to start and stop time-based nodes, such as interpolators. The VisibilitySensor detects when a specific part of the world becomes visible to the user. Pointing device sensors detect user pointing events, such as the user activating a piece of geometry (e.g., via a TouchSensor). Proximity, time, and visibility sensors are additive: each is processed independently of whether others exist or overlap.
The following nodes are considered to be pointing device sensors:
Anchor |
CylinderSensor |
PlaneSensor |
SphereSensor |
TouchSensor |
Pointing device sensors are activated when the user points to geometry that is influenced by a specific pointing device sensor. These sensors have influence over all geometry that is descendant from the sensor's parent group. [In the case of the Anchor node, the Anchor itself is considered to be the parent group.] Typically, the pointing device sensor is a sibling to the geometry that it influences. In other cases, the sensor is a sibling to groups which contain geometry (that is influenced by the pointing device sensor).
For a given user activation, the lowest enabled pointing device sensor in the hierarchy is activated; all other pointing device sensors above it are ignored. The hierarchy is defined by the geometry node which is activated and the entire hierarchy upward. If there are multiple pointing device sensors tied for lowest, each of these is activated simultaneously and independently, possibly resulting in multiple sensors generating output simultaneously. This feature allows useful combinations of pointing device sensors (e.g., a TouchSensor and a PlaneSensor); the selection rule is sketched in code after the example below. If a pointing device sensor is instanced (DEF/USE), then the geometry associated with each of its parents must be tested for intersection, and the sensor is activated if any of that geometry is hit.
The Anchor node is considered to be a pointing device sensor when trying to determine which sensor (or Anchor) to activate. For example, in the following file a click on Shape3 is handled by SensorD, a click on Shape2 is handled by SensorC and the AnchorA, and a click on Shape1 is handled by SensorA and SensorB:
    Group {
        children [
            DEF Shape1  Shape { ... }
            DEF SensorA TouchSensor { ... }
            DEF SensorB PlaneSensor { ... }
            DEF AnchorA Anchor {
                url "..."
                children [
                    DEF Shape2  Shape { ... }
                    DEF SensorC TouchSensor { ... }
                    Group {
                        children [
                            DEF Shape3  Shape { ... }
                            DEF SensorD TouchSensor { ... }
                        ]
                    }
                ]
            }
        ]
    }
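The "lowest enabled sensor wins" rule amounts to a walk up the picked geometry's ancestor chain. The following Python sketch uses a hypothetical GroupNode structure of our own devising; only the selection logic mirrors the rule above:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GroupNode:
        name: str
        sensors: List[str] = field(default_factory=list)  # enabled pointing sensors

    def activated_sensors(pick_path: List[GroupNode]) -> List[str]:
        """pick_path lists the grouping nodes from the picked geometry's
        parent up to the root. The lowest level with any enabled pointing
        device sensors wins; sensors tied at that level all activate."""
        for group in pick_path:               # nearest ancestor first
            if group.sensors:
                return list(group.sensors)
        return []                             # no sensor influences this geometry

    # A click on Shape3 reaches SensorD first; SensorC, AnchorA, SensorA,
    # and SensorB higher in the hierarchy are ignored:
    print(activated_sensors([GroupNode("inner Group", ["SensorD"]),
                             GroupNode("AnchorA", ["SensorC", "AnchorA"]),
                             GroupNode("top Group", ["SensorA", "SensorB"])]))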
Drag sensors are a subset of the pointing device sensors. There are three drag sensors (CylinderSensor, PlaneSensor, SphereSensor), in which pointer motions cause events to be generated according to the "virtual shape" of the sensor. For instance, the output of the SphereSensor is an SFRotation, rotation_changed, which can be routed to a Transform node's set_rotation eventIn to rotate an object. The effect is that the user grabs an object and spins it about the center point of the SphereSensor.
To simplify the application of these sensors, each drag sensor has an offset and an autoOffset exposed field. Whenever the sensor generates output (as a response to pointer motion), the output value (e.g., SphereSensor's rotation_changed) is added to the offset. If autoOffset is TRUE (the default), this offset is set to the last output value when the pointing device button is released (isActive FALSE). This allows subsequent grabbing operations to generate output relative to the last release point. A simple dragger can be constructed by sending the output of the sensor to a Transform whose child is the object being grabbed. For example:
    Group {
        children [
            DEF S SphereSensor { autoOffset TRUE }
            DEF T Transform {
                children Shape { geometry Box {} }
            }
        ]
        ROUTE S.rotation_changed TO T.set_rotation
    }
The box will spin when it is grabbed and moved via the pointer.
When the pointing device button is released, offset is set to the last output value and an offset_changed event is sent out. This behavior can be disabled by setting the autoOffset field to FALSE.
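The offset/autoOffset bookkeeping shared by the drag sensors can be summarized in a few lines of Python. This sketch uses a scalar stand-in for the output value (for a SphereSensor the "addition" is really a composition of rotations), and the class name is ours:

    class DragSensorOffset:
        def __init__(self, auto_offset=True, offset=0.0):
            self.auto_offset = auto_offset
            self.offset = offset
            self.last_output = offset

        def pointer_motion(self, raw):
            """Output (e.g. rotation_changed) is pointer motion plus offset."""
            self.last_output = raw + self.offset
            return self.last_output

        def release(self):
            """isActive FALSE: with autoOffset TRUE the last output becomes
            the new offset (and offset_changed is sent), so the next drag
            continues from the release point."""
            if self.auto_offset:
                self.offset = self.last_output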
AudioClip, MovieTexture, and TimeSensor are time-dependent nodes that activate and deactivate themselves at specified times. Each of these nodes contains the exposedFields startTime, stopTime, and loop, and the eventOut isActive. The exposedField values are used to determine when the container node becomes active or inactive. Also, under certain conditions, these nodes ignore events to some of their exposedFields. A node ignores an eventIn by not accepting the new value and not generating the corresponding _changed eventOut. In this section we refer to an abstract TimeDep node, which can be any one of AudioClip, MovieTexture, or TimeSensor.
TimeDep nodes can execute for 0 or more cycles. A cycle is defined by field data within the node. If, at the end of a cycle, the value of loop is FALSE, then execution is terminated (see below for events at termination). Conversely, if loop is TRUE at the end of a cycle, then a TimeDep node continues execution into the next cycle. A TimeDep node with loop TRUE at the end of every cycle continues cycling forever if startTime >= stopTime, or until stopTime if stopTime > startTime.
A TimeDep node will generate an isActive TRUE event when it becomes active and will generate an isActive FALSE event when it becomes inactive. These are the only times at which an isActive event is generated, i.e., they are not sent at each tick of a simulation.
A TimeDep node is inactive until its startTime is reached. When time now equals startTime, an isActive TRUE event is generated and the TimeDep node becomes active. When a TimeDep node is read from a file, and the ROUTEs specified within the file have been established, the node should determine whether it is active and, if so, generate an isActive TRUE event and begin generating any other necessary events. However, if a node would have become inactive at any time before the reading of the file, no events are generated upon the completion of the read.
An active TimeDep node will become inactive at time now when now = stopTime, provided stopTime > startTime. The value of stopTime is ignored if stopTime <= startTime. Also, an active TimeDep node will become inactive at the end of the current cycle if loop is FALSE. If an active TimeDep node receives a set_loop FALSE event, execution continues until the end of the current cycle or until stopTime (if stopTime > startTime), whichever occurs first. The termination at the end of the cycle can be overridden by a subsequent set_loop TRUE event.
set_startTime events sent to an active TimeDep node are ignored. set_stopTime events where set_stopTime <= startTime are also ignored by an active TimeDep node. A set_stopTime event to an active TimeDep node, where startTime < set_stopTime <= now, results in events being generated as if stopTime = now: final events, including an isActive FALSE, are generated and the node becomes inactive. The stopTime_changed event will have the set_stopTime value. Other final events are node dependent (cf. TimeSensor).
A TimeDep node may be re-started while it is active by sending it a set_stopTime = now event (which will cause the node to become inactive) and a set_startTime event (setting it to now or any time in the future). Browser authors should note that these events will have the same time stamp and should be processed as set_stopTime, then set_startTime to produce the correct behavior.
The default values for each of the TimeDep nodes have been specified such that a node with default values became inactive in the past (and, therefore, generates no events upon being read). A TimeDep node can be made active upon reading by specifying loop TRUE. Such nonterminating TimeDep nodes should be used with caution, since they incur continuous overhead on the simulation.
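The activation rules above can be condensed into a single predicate. The Python sketch below is our own reading of this section; cycle_interval stands for the node-specific cycle length (e.g., a TimeSensor's cycleInterval), and handling of set_startTime/set_stopTime events arriving during execution is omitted:

    def is_active(now, start_time, stop_time, loop, cycle_interval):
        """Is a TimeDep node with these field values running at time now?"""
        if now < start_time:
            return False                  # not started yet
        if start_time < stop_time <= now:
            return False                  # stopped by a valid stopTime
        if loop:
            return True                   # cycles until stopTime, or forever
        return now < start_time + cycle_interval   # loop FALSE: one cycle only

    # The default values (startTime 0, stopTime 0, loop FALSE) went inactive
    # in the past, so a freshly read node generates no events:
    print(is_active(now=1.0e9, start_time=0.0, stop_time=0.0,
                    loop=False, cycle_interval=1.0))   # False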