Revision Info: Documentation for ARchi VR Version 3.2 - November 2023
Creating AR experiences poses additional challenges compared to designing virtual reality (VR) and 3D content (e.g., video games). When creating VR/3D scenes, designers are in control of the virtual world they are building (even if it is programmatically generated), thus taking on a sort of "god role". AR experiences, in contrast, take place in the uncontrolled real world, where scene understanding algorithms detect the user's spatial context. The AR experience is then driven by elements detected in the real world, and the creator has no control over their occurrence and timing during the creation process.
ARchi VR supports declarative programming to build AR experiences. "declARe" is the declarative programming approach for the specification of AR experiences in ARchi VR. The content of an AR scene is a composition of virtual 3D items that are placed into the spatial context of the running AR session.
In an imperative approach, dynamic behavior would be realized by calling functions or object methods. This is not the approach taken by "declARe". Instead, the dynamic behavior is driven by events. You therefore need to define the expected reactions to events that can happen in an AR session. These reactions are expressed as active rules based on Event-Condition-Action triples.
Whenever the ARchi VR App should perform a task, an event has to trigger its execution. The occurrence of an event is an observation saying that "something has happened". Events do not automatically trigger a reaction (or side effect). An active rule needs to bind an event type to a specific task execution. In ARchi VR this is realized as an "Active Rule", which is expressed as an Event-Condition-Action triple. Active rules signal the App that there is specific work to be done which will change the internal state of the system.
Active rules are processed asynchronously to avoid coordination and waiting. In such a reactive programming approach there is no explicit control over time-ordered execution.
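Conceptually, such asynchronous rule processing can be sketched as a tiny event dispatcher. The following Python sketch is purely illustrative (the class and method names are assumptions, not the ARchi VR API): each posted event is matched against the registered rules, and a rule's action only runs when its condition holds.

```python
# Minimal Event-Condition-Action rule engine (illustrative sketch only).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActiveRule:
    event: str                                               # e.g. "on:start", "on:tap"
    condition: Callable[[dict], bool] = lambda state: True   # no condition given => true
    action: Callable[[dict], None] = lambda state: None      # task to execute

@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)
    state: dict = field(default_factory=dict)

    def post(self, event: str):
        """Dispatch an event: fire every matching rule whose condition holds."""
        for rule in self.rules:
            if rule.event == event and rule.condition(self.state):
                rule.action(self.state)

engine = RuleEngine()
engine.rules.append(ActiveRule(
    event="on:start",
    action=lambda s: s.update(started=True)))
engine.post("on:start")
print(engine.state)  # {'started': True}
```

Note that the engine only reacts to posted events; there is no explicit control over the time-ordered execution of rules, matching the reactive approach described above.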
To design reactive systems, breaking down the system's behavior into discrete events, conditions, and actions provides a structured and modular approach. An event is a signal that something has occurred, such as the start of an AR session (on:start), a user tapping on an item (on:tap), or the detection of an image marker (on:detect).
For a compact representation of active rules, a diagram consisting of rule-reaction blocks is used. In the ARchi Composer such diagrams can be generated from "declARe" code. The first line shows the active rule as Event-Condition-Action triple. The blockquoted line after the rule depicts the changed state as reaction:
changed state as reaction
The following example of an active rule is triggered by a temporal event (in 20 secs). If no item is found in the current AR session (the condition), voice feedback is given as the reaction, using the internal Text-To-Speech system:
"you may add an item" 🗣
Immediate execution of tasks or function calls at invocation is standard behavior and does not need any special condition handling. Default AR events are common triggers driven by the AR session, such as on:start, on:error, on:stop. If no condition is defined, it evaluates to true and the diagram shows an immediate execution arrow (→):
changed state as reaction
Cascading reactions are presented as indented blockquotes:
Action ← response ••• https://service.metason.net/ar/doit.json
on:command → do:set
data.val = 0
in:5 → do:set
data.val = 5
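The cascade above could be emulated as follows (an illustrative Python sketch, not declARe code: the do:set task is modeled as a plain dictionary update, and the temporal event in:5 as a timer):

```python
import threading

data = {"val": None}   # shared data model

def do_set(key, value):
    # Task "do:set": write a key-value pair into the data model.
    data[key] = value

def on_command():
    # on:command → do:set data.val = 0
    do_set("val", 0)
    # in:5 → do:set data.val = 5 (scheduled follow-up reaction)
    timer = threading.Timer(5.0, do_set, args=("val", 5))
    timer.start()
    return timer

timer = on_command()
```

The second rule is not called directly by the first one; it is merely scheduled, so the two reactions stay decoupled in time.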
An event is a signal that something has happened. Events are generated by a producer and triggered by various circumstances. Within an Augmented Reality session of ARchi VR, the following event types may occur:
| Event Type | Producer | Cause | Time Resolution |
| --- | --- | --- | --- |
| Session Event | AR Session | Change of session state | |
| Invocation Event | Command Initiation or Function Call | Invocation of task | |
| Detection Event | Installed Detector | Discovery of designated entity | 100 - 500 ms |
| User Event | App User | User interaction | |
| Temporal Event | Time Scheduler | Elapsed time in seconds reached | |
| Data-driven Event | Data Observer & Context Dispatcher | Observed change of key-value in data model | |
| Response Event | Remote Request | Async response of REST API call | 20 - 5'000 ms |
| Notification Event | Subscribed System: Bonjour or SharePlay | Received notification during collaboration | 50 - 250 ms |
on:start: immediately after start of AR session or after loading action
on:locating: on locating in the world (by GPS, by SLAM device positioning)
on:stable: when spatial registration of AR device gets stable
on:load: after loading 3D item to AR view, e.g., to animate or occlude node
on:stop: before AR session ends
on:command: on command initiation
on:call: on function call
on:tap: when tapped on item
on:press: when long-pressed on item
on:drag: when dragging an item
on:select: when selected from options of pop-up menu
on:dialog: when selected from options of pop-up dialog panel
on:poi: when selected a point of interest in a map or minimap
in:time: when elapsed time in seconds is reached
as:always: several times per second
as:repeated: like as:always, but only triggered each second
By using a state machine, both value changes and state transitions can generate data-driven events, taking into account previous values. This dynamic triggering turns ECA rules into active rules.
on:change: on each change of data value
as:stated: like as:always, but the action is only triggered once, when the if-condition result is altered from false to true
as:steady: like as:stated, but the action is only triggered when the condition result stays true for a certain time in seconds
as:activated: like as:always, but the action is triggered each time the if-condition result becomes true
as:altered: like as:always, but the action is triggered whenever the if-condition result is altered from false to true or from true to false
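The differences between these evaluation modes amount to edge detection on the condition result. The following Python sketch is an illustration only (names are assumptions, not ARchi VR code): the watcher re-evaluates an if-condition against the data model and reports which modes would fire.

```python
class ConditionWatcher:
    """Re-evaluates an if-condition and reports edge transitions,
    as needed for as:stated / as:activated / as:altered style triggering."""

    def __init__(self, condition):
        self.condition = condition
        self.previous = False    # last evaluated result
        self.fired_once = False  # for as:stated (first false→true edge only)

    def poll(self, state):
        current = bool(self.condition(state))
        rose = current and not self.previous   # as:activated: becomes true
        changed = current != self.previous     # as:altered: any flip
        stated = rose and not self.fired_once  # as:stated: first rising edge only
        if stated:
            self.fired_once = True
        self.previous = current
        return {"as:activated": rose, "as:altered": changed, "as:stated": stated}

watcher = ConditionWatcher(lambda s: s["distance"] < 1.0)
watcher.poll({"distance": 2.0})  # condition false: nothing triggers
watcher.poll({"distance": 0.5})  # rising edge: all three modes trigger
watcher.poll({"distance": 0.5})  # condition stays true: none trigger
```

as:steady would additionally require the condition to remain true over repeated polls for a given number of seconds before firing.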
on:response: on receiving response from request
on:error: on error of handling request
on:detect: on detecting occurrence of depicted type
on:track: on tracked changes in occurrence of depicted type
on:voice: on voice command from speech recognition system
on:enter: on enter of participant in collaboration session
on:message: on message from participant in collaboration session
on:leave: on leave of participant in collaboration session
AR patterns serve as a valuable means of communicating proven, reusable solutions to recurring design problems encountered during AR development.
The dynamic behavior of an AR experience is determined by its ECA rules, which are triggered by events occurring in the actual real-world context. The following table lists common behavioral patterns in AR that result from ECA rules.
| Behavioral Pattern | Description | Examples |
| --- | --- | --- |
| Immediate Reaction Pattern | Direct execution of task triggered by invocation event | Immediate, singular command of task or function call |
| Timed Reaction Pattern | Temporally executed action | Delayed action, timed action sequence |
| Conditional Reaction Pattern | Execute an action only when a condition is fulfilled after being triggered by an event | State-driven, asynchronous programming logic |
| Continuous Evaluation Pattern | Continuous polling of state change | Existence check, visibility check, proximity check, repeated update checks |
| Publish-Subscribe Notification Pattern | Receive notifications via a message queue from a subscribed system | In FaceTime/SharePlay call, in Bluetooth connection, in WebSocket/WebRTC session |
| Request-Response Pattern | Remote procedure call resulting in asynchronously receiving ECA rules or media assets | REST API call via a Web URL to load rules or assets (images, 3D models) |
| Chain Reaction Pattern | Course of events processed as a sequence of indirect reactions | Rule changing data that will trigger a rule to update an item's visual as a follow-up |
| Complementary Reactions Pattern | Two reactions with opposite results | Reacting on toggling states with two complementary active rules |
| Detector Reactivation Pattern | Reactivate a detector whose reaction runs only once | Reactivate detector after the resulting augmentation no longer exists |
While a VR/3D designer places virtual objects using positions in a controlled world coordinate system, an AR content creator primarily specifies object placement intents relative to appearing anchors, which are dynamically produced by detectors. These spatial anchors serve as reference points for pinning objects. Generally in AR Patterns, the augmentation intents are formulated as ECA rules that are triggered by detector events. When a detector event occurs, the ECA rule's reaction will add augmentation items to the AR scene.
The following table outlines several common placement intents for event-driven augmentation patterns that can be used to stage AR experiences. In AR, the real world serves as the spatial context for the stage, making users both spectators and performers. Their movements and perspectives influence the firing of events, leaving limited control over time and space for AR scenography (in contrast to film, theater, and VR/3D/game design).
| Augmentation Pattern | Description | Examples |
| --- | --- | --- |
| Geolocated Remark Pattern | Triggering of action or of user feedback based on GPS location data (long/lat) or address data (city, street, ...) | Visual or audio feedback in standard UI about location-based point of interest |
| Segment Overlay Pattern | Presentation of 2D overlay on top of image segment detected in video stream | Attaching 2D text description to a detected image segment |
| Area Enrichment Pattern | Approximately placing 3D content at area of image segment | Presenting balloons in sky area |
| Captured Twin Pattern | Captured element of real world added as 3D model | Captured walls, doors and windows in an indoor AR session |
| Anchored Supplement Pattern | Presentation of 3D content aligned to detected entity for enhancement | Attaching visual 3D elements to a detected image (marker) or captured object |
| Superimposition Pattern | Presentation of 3D content replacing a detected entity | (Re-)Place a detected image with another one |
| Tag-along Pattern | Presentation of 3D content within user's field of view while head-locked | Place interactive 3D elements that follow the user |
| Hand/Palm Pop-up Pattern | Presentation of 3D content on palm of hand while visible | Place 3D UI elements at palm of user's one hand |
| Ahead Staging Pattern | Presentation of 3D content ahead of user | Placing 3D item on floor in front of spectator |
| Pass-through Portal Pattern | Presentation of partly hidden 3D content to force user to go through | Placing 3D scene behind a portal / behind an opening |
| Staged Progression Pattern | Ordered, linear story: temporal order or interaction flow of 3D presentations | Sequence of 3D presentations with forth and optionally back movements |
| Attention Director Pattern | Guide user's attention to relevant place | Use animated pointers to direct user's attention |
| Contextual Plot Pattern | Spatio-temporal setting that aggregates diverse AR patterns to form a non-linear plot | Scenography of dynamic, interactive, and animated AR |
For more details on AR Patterns see github.com/ARpatterns.