ARchi VR Actions

Revision Info: Documentation for ARchi VR Version 2.91 - April 2022



An action contains two parts: a list of new items to be inserted into the scene by the do: add task (see Task section below), and a sequence of tasks that specify the behavior of new or existing items.

The Action data structure is used in the ARchi VR App

The following JSON data structure is an example of an Action (which could be the result of a service call):

{ "$schema": "", "items": [ { "id": "", "vertices": [ [ 0.0, 0.0, 0.0 ], [ 1.0, 0.0, 0.0 ] ], "type": "Route", "subtype": "Distance" } ], "tasks": [ { "do": "add", "id": "", "ahead": "-0.5 0.0 -1.0" } ] }


Item definitions are documented in ARchi VR Elements.

A good way to get a sample data structure of an item type is to create it interactively within ARchi VR and then analyze the generated JSON code (e.g., by using the inspector).


Tasks define the behavior in the AR scene and the UI of the App. They are described in the next chapter.


The following tasks are currently supported by ARchi VR:

Item-related tasks modify the model, and changes are saved. The do: add task adds a new item to the AR scene, defined in the items list of the Action. All other tasks manipulate existing items in the running AR session.

[
  {
    "do": "add",
    "id": "_",
    "at": "2.0 4.0 5.0" // absolute coordinates
  },
  {
    "do": "add",
    "id": "_",
    "ahead": "-1.0 0.0 -2.0" // relative to user position and orientation
  },
  {
    "do": "remove",
    "id": "_"
  },
  {
    "do": "move", // move model and 3D
    "id": "_",
    "to": "2.0 4.0 5.0" // absolute
  },
  {
    "do": "move", // move model and 3D
    "id": "_",
    "by": "0.1 0.5 0.3" // relative
  },
  {
    "do": "turn", // turn model and 3D
    "id": "_",
    "to": "90.0" // absolute y rotation in degrees, clockwise
  },
  {
    "do": "turn", // turn model and 3D
    "id": "_",
    "by": "-45.0" // relative y rotation in degrees, clockwise
  },
  {
    "do": "spin", // turn model and 3D
    "id": "_",
    "to": "90.0" // absolute y rotation in radians, counter-clockwise
  },
  {
    "do": "spin", // turn model and 3D
    "id": "_",
    "by": "-45.0" // relative y rotation in radians, counter-clockwise
  },
  {
    "do": "tint", // set color of item
    "color": "_" // as name (e.g., "red") or as RGB (e.g., "#0000FF")
  },
  {
    "do": "tint", // set background color of item
    "bgcolor": "_" // as name (e.g., "red") or as RGB (e.g., "#0000FF")
  },
  {
    "do": "lock",
    "id": "_"
  },
  {
    "do": "unlock",
    "id": "_"
  },
  {
    "do": "change", // or "do": "classify" (is equivalent)
    "id": "_",
    "type": "Interior", // optional
    "subtype": "Couch", // optional
    "name": "Red Sofa", // optional
    "attributes": "color:red", // optional
    "content": "_", // optional
    "asset": "_", // optional
    "setup": "_", // optional
    "children": "_" // optional
  },
  {
    "do": "select",
    "title": "___", // menu title
    "id": "_",
    "field": "_", // item field: type, subtype, name, attributes, content, asset
    "values": "_value1;_value2;_value3",
    "labels": "_label1;_label2;_label3" // optional (when labels differ from values)
  }
]

For all tasks, x-y-z parameters are interpreted according to the right-handed coordinate system convention. An optional on parameter defines a relative reference point. By default it is "on": "Floor", but it can be set to "Ceiling", "Wall", "Item" (one that is ahead of the user), or to an item id.

The "do": "add" task adds a new item to the scene. The id corresponds to an item in the items list, and the position is specified with an x-y-z coordinate by at (places the item in absolute world space) or ahead (places the item relative to the user's position and orientation). The height represented by the y-coordinate is relative to the floor by default, but can also be relative to the ceiling height or the top of an item depending on the aforementioned on parameter.

For example, the following JSON code will place an item on the wall in front of the user at a height of 1. If no wall is ahead, the item will be placed on the floor.

{
  "do": "add",
  "id": "",
  "ahead": "0.0 1.0 0.0",
  "on": "Wall"
}

This task will place the flower item on the table item:

{
  "do": "add",
  "id": "",
  "at": "-0.75 0.0 -0.40",
  "on": ""
}

The "do": "move" and "do": "turn" tasks manipulate the model elements and the corresponding 3D visualization nodes. The "do": "turn" task turns in degrees clockwise.
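As a minimal sketch (the item id _ is a placeholder), the two tasks can be combined to shift an item half a meter and then give it a quarter turn clockwise:

```json
[
  { "do": "move", "id": "_", "by": "0.0 0.0 -0.5" },
  { "do": "turn", "id": "_", "by": "90.0" }
]
```

Because both tasks manipulate the model, the changes persist when the scene is saved.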

Visual-related tasks change the visual representation of items in the AR scene but are not reflected in or stored to the model. The id is first interpreted as an item id; if no item is found, the id is interpreted as the name of a 3D node in the scene.

[
  { "do": "pulsate", "id": "_" },
  { "do": "depulsate", "id": "_" },
  { "do": "billboard", "id": "_" },
  { "do": "unbillboard", "id": "_" },
  { "do": "highlight", "id": "_" },
  { "do": "dehighlight", "id": "_" },
  {
    "do": "hide",
    "id": "_" // item id or node name
  },
  {
    "do": "unhide",
    "id": "_" // item id or node name
  },
  {
    "do": "add", // add visual node
    "id": "_", // of item with id
    "to": "_" // to parent node name to which it will be added as (cloned) child
  },
  {
    "do": "translate", // translate 3D only
    "id": "_",
    "to": "1.0 0.0 -1.0"
  },
  {
    "do": "translate", // translate 3D only
    "id": "_",
    "by": "0.0 0.75 0.0"
  },
  {
    "do": "rotate", // rotate 3D only
    "id": "_",
    "to": "3.141 0.0 0.0" // absolute coordinates in radians, counter-clockwise
  },
  {
    "do": "rotate", // rotate 3D only
    "id": "_",
    "by": "0.0 -1.5705 0.0" // relative coordinates in radians, counter-clockwise
  },
  {
    "do": "scale",
    "id": "_",
    "to": "2.0 2.0 1.0" // absolute scale factors
  },
  {
    "do": "scale",
    "id": "_",
    "by": "0.5 0.5 0.0" // add relative to existing scale factor
  },
  {
    "do": "occlude", // set geometry of 3D node as occluding but not visible
    "id": "_", // item id
    "node": "_" // name of 3D node (can be child node)
  },
  {
    "do": "illuminate",
    "from": "camera" // default lighting
  },
  {
    "do": "illuminate",
    "from": "above" // spot light from above with ambient light
  }
]

The "do": "translate" and "do": "rotate" tasks only change the visual 3D nodes (not the model item itself). The "do": "rotate" task rotates counter-clockwise, interpreting its values as Euler angles in radians.
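For instance, assuming a placeholder item id, a quarter turn (90° = π/2 ≈ 1.5708 rad) around the y-axis that affects only the 3D node would be:

```json
{ "do": "rotate", "id": "_", "by": "0.0 1.5708 0.0" }
```

Since the model item stays unchanged, this rotation is not persisted when the scene is saved.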


The "do": "animate" task creates a constant animation of the graphical node of an item (the model item itself stays unchanged). The id can also reference a node of an imported 3D geometry by name.

{
  "dispatch": "onload", // run task after item is loaded into AR view
  "do": "animate",
  "id": "_", // item id or 3D node name
  "key": "position.y",
  "from": "1.0",
  "to": "1.5",
  "for": "1.5", // in seconds
  "repeat": "INFINITE",
  "reverse": "true"
}

The key parameter specifies which variable of the 3D node will be affected by the animation. Possible key values are:

The from and to parameters are numeric values (as strings) between which the animation will interpolate.

The for parameter specifies the basic duration or interval of the animation in seconds.

The repeat parameter may be a fractional number specifying how often the animation interval is repeated. If set to "INFINITE", the animation will repeat forever.

If the reverse parameter is set to "true", the repeated interval will run back and forth.

The "do": "stop" task removes an animation with the given key from the graphical node of an item.

{
  "do": "stop",
  "id": "_", // item id or 3D node name
  "key": "position.y"
}

An example of an animated action:

{
  "$schema": "",
  "items": [
    {
      "id": "",
      "type": "Geometry",
      "subtype": "Box",
      "attributes": "color:#FF0000; wxdxh:0.4x0.4x0.7"
    }
  ],
  "tasks": [
    { "do": "add", "id": "", "ahead": "-0.2 0.0 -1.0" },
    {
      "do": "animate",
      "id": "",
      "key": "eulerAngles.y",
      "from": "0.0",
      "to": "3.14",
      "for": "1.5",
      "repeat": "INFINITE",
      "reverse": "true"
    },
    { "do": "stop", "id": "", "key": "eulerAngles.y", "in": "9.0" }
  ]
}

The following tasks are used to control the user interface of the AR view in the ARchi VR app.

[
  { "do": "enable", "system": "Sonification" },
  { "do": "disable", "system": "Sonification" },
  { "do": "enable", "system": "Voice Guide" },
  { "do": "disable", "system": "Voice Guide" },
  {
    "do": "enable",
    "system": "Occlussion" // people occlusion in AR view
  },
  { "do": "disable", "system": "Occlussion" },
  { "do": "enable", "system": "ML" },
  { "do": "disable", "system": "ML" },
  {
    "do": "enable",
    "system": "Depth" // enable depth map
  },
  {
    "do": "skip",
    "state": "walls" // skip wall capturing
  },
  {
    "do": "skip", // Instant AR mode
    "state": "floor" // skip floor and wall capturing
  },
  {
    "do": "raycast", // create raycast hit(s)
    "id": "_" // "" (target icon) or overlay id from where ray is sent (e.g., from BBoxLabel)
  },
  {
    "do": "snapshot" // take photo shot
  },
  {
    "do": "screenshot" // take screen snapshot
  },
  {
    "do": "say",
    "text": "Good morning"
  },
  {
    "do": "play",
    "snd": "1113" // system sound ID
  },
  {
    "do": "stream",
    "url": "https://____.mp3",
    "volume": "0.5" // NOTE: volume setting is currently not working
  },
  {
    "do": "loop",
    "url": "https://____.mp3",
    "volume": "0.5" // NOTE: volume setting is currently not working
  },
  {
    "do": "pause" // pause current stream
  },
  {
    "do": "prompt", // show instruction in a GUI popup window
    "title": "_",
    "text": "_",
    "button": "_", // optional, default is "Continue"
    "img": "_URL_" // optional, image URL
  },
  {
    "do": "confirm", // get a YES/NO response as confirmation via a GUI popup window
    "dialog": "Do you want to ...?",
    "then": "function(...)",
    "else": "function(...)", // optional
    "yes": "Option 1", // optional, default "Yes"
    "no": "Option 2" // optional, default "No"
  },
  {
    "do": "inspect", // show inspector with context information of current AR session
    "key": "_" // optional, key as filter, e.g., "data"
  },
  {
    "do": "position", // open xyz pad controller for positioning item
    "id": "_"
  },
  {
    "do": "resize", // open wxhxd pad controller for resizing item
    "id": "_"
  },
  {
    "do": "set", // set text label of AR view UI
    "id": "UI._", // e.g., UI.status, UI.warning
    "value": "_text"
  },
  {
    "do": "clear",
    "unit": "UI" // clear text labels of AR view UI and hide target image
  }
]

The "do": "say" task will use text-to-speech technology to read the given text aloud.

The "do": "play" task needs a SystemSoundID number for the snd parameter. See the list of Apple iOS System Sound IDs for more information.
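For example, a short hypothetical sequence that speaks a greeting and then plays system sound 1113 one second later (using the in delay parameter described in the Temporal Tasks section):

```json
[
  { "do": "say", "text": "Welcome to the tour." },
  { "do": "play", "snd": "1113", "in": "1.0" }
]
```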

The following tasks are used to control data in the ARchi VR app.

[
  {
    "do": "set", // set a string variable
    "data": "_varname", // field name, key
    "value": "_" // value as a string
  },
  {
    "do": "assign", // set a numeric variable
    "value": "_", // MUST be a string which will be converted to an integer or a float value
    "data": "_varname" // field name, key
  },
  {
    "do": "select",
    "title": "___", // menu title
    "data": "_varname",
    "values": "_value1;_value2;_value3",
    "labels": "_label1;_label2;_label3" // optional (when labels differ from values)
  },
  {
    "do": "concat", // concat to a string variable
    "data": "_varname", // field name, key
    "with": "_" // result from an expression evaluated by AR context, use single quotes for 'string'
  },
  {
    "do": "eval", // set a variable by evaluating an expression
    "expression": "_", // value as a string or as result from an expression evaluated by AR context
    "data": "_varname" // field name, key
  },
  {
    "do": "fetch", // fetch data from remote and map to internal data
    "parameters": "$0 =; $1 = location.countryCode",
    "url": "https://___/do?city=$0&country=$1",
    "map": "data.var1 = result.key1; data.var2 = result.key2;"
  },
  {
    "do": "clear",
    "unit": "data" // clear all data entries
  },
  {
    "do": "clear",
    "unit": "temp" // clear all files in temporary directory
  },
  {
    "do": "clear",
    "unit": "cache" // clear all cached requests
  },
  {
    "do": "clear",
    "unit": "3D" // clear unused cached 3D models
  }
]

The following tasks are used to control the application logic or the sequence of Actions in the ARchi VR app.

[
  {
    "do": "save" // save the AR scene
  },
  {
    "do": "avoid save" // disable save and save dialog
  },
  {
    "do": "exit" // exit AR view without saving
  },
  {
    "do": "execute",
    "op": "function(...)"
  },
  {
    "do": "service", // start service with id
    "id": "_"
  },
  {
    "do": "workflow", // start workflow with id
    "id": "_"
  },
  {
    "do": "request",
    "url": ""
  },
  {
    "do": "clear",
    "unit": "tasks" // clear all dispatched tasks
  }
]

The "do": "execute" task runs functions which are explained in the Interaction chapter below.

During an AR session new content can be requested by the "do" : "request" task. The result of the request is an Action in JSON which will be executed by the ARchi VR App as specified.

{ "do" : "request", "url": "" }

The "do" : "request" task may upload multi-part POST data with its URL request. See Dynamic Content by Server-side Application Logic for details on supported POST data transfer modes.

{ "do" : "request", "url": "", "upload": "POST:CAM,POST:CONTEXT" }

If "upload" is not defined or empty, the request is run in GET:JSON data transfer mode.

It is possible to define URL parameters using value mapping from Run-time Data such as:

{
  "do": "request",
  "parameters": "$0 = data.var; $1 = dev.screen.width",
  "url": "$0&w=$1",
  "upload": "POST:USER"
}

The following tasks are used to install detectors in the ARchi VR app for scene understanding.

[
  {
    "do": "detect",
    "img": "",
    "width": "2.14", // width in meters of real-world image
    "height": "1.56", // height in meters of real-world image
    "id": "_", // id of item that will be added on detection
    "op": "function('_', '_')" // optional, functions run after detection
  },
  {
    "do": "detect",
    "text": "_text", // regex
    "op": "function('Oh, a cool _text_', 'say')"
  },
  {
    "do": "halt", // halt (deinstall) detectors
    "id": "detected.text._text" // id of detector to halt
  }
]

Sample Actions

Example 1 - Place a warning on the floor in front of the user:

{
  "$schema": "",
  "items": [
    {
      "id": "",
      "vertices": [ [ 0.0, 0.0, 0.0 ] ],
      "type": "Spot",
      "subtype": "Warning",
      "name": "Attention"
    }
  ],
  "tasks": [
    { "do": "add", "id": "", "ahead": "0.0 0.0 -1.0" }
  ]
}

Example 2 - Place a 3D object in front of the user:

{
  "$schema": "",
  "items": [
    {
      "id": "",
      "asset": "",
      "attributes": "model: A-Table;wxdxh: 2.35x0.88x0.71;scale: 0.01;",
      "type": "Interior",
      "subtype": "Table"
    }
  ],
  "tasks": [
    { "do": "add", "id": "", "ahead": "-0.5 0.0 -1.5" }
  ]
}

Example 3 - Place a text panel at a fixed position and turn it to 90 degrees:

{
  "items": [
    {
      "id": "",
      "type": "Spot",
      "subtype": "Panel",
      "vertices": [ [ 0.0, 0.0, 0.0 ] ],
      "asset": "<b>Hello</b><br>Welcome to ARchi VR.<br><small><i>Augmented Reality</i> at its best.</small>",
      "attributes": "color:#DDCCFF; bgcolor:#333333DD; scale:2.0"
    }
  ],
  "tasks": [
    { "do": "add", "id": "", "at": "1.0 0.75 -2.0" },
    { "do": "turn", "id": "", "to": "90.0" }
  ]
}

See chapter Examples of Service Extensions for more sample code.

Dynamic Behavior using Active Rules

Active rules specify how events drive dynamic responses within an AR view. ARchi VR runs a state machine that dispatches the triggering of events. When an event occurs, the conditions of active rules are evaluated; if a condition is met, the corresponding task is executed.

Active Rule: Event - Condition - Task

Each task can be turned into an active rule consisting of these three parts:

Event: specifies when a signal triggers the invocation of the rule

Condition: a logical test that queries run-time data of the current AR session

Task: invocation of task
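Put together, a hypothetical active rule with all three parts might look like this (the condition queries the run-time data described below):

```json
{
  "dispatch": "stated",      // event: fires when the condition turns from false to true
  "if": "items.@count > 3",  // condition: query on run-time data of the AR session
  "do": "say",               // task: invoked when event and condition match
  "text": "More than three items placed."
}
```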

Temporal Tasks and Task Sequences

The execution of tasks can be temporally controlled with the in parameter which defines the delay in seconds. If the in parameter is not specified, the task will execute immediately.

{ "do": "remove", "id": "_", "in": "2.5" }

Each task can be placed on the timeline so that time-controlled sequences of tasks can be defined.
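A sketch of such a timeline, assuming a placeholder item id: the item is highlighted immediately, de-highlighted after two seconds, and removed after five:

```json
[
  { "do": "highlight", "id": "_" },
  { "do": "dehighlight", "id": "_", "in": "2.0" },
  { "do": "remove", "id": "_", "in": "5.0" }
]
```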

Run-time Data

The following data elements are accessible in the if expression, in value calculations (see Predicates and Expressions), and in parameter mappings.

// space/room data
floor // floor element
walls // array of wall elements
cutouts // array of cutouts (doors, windows)
items // array of items
links // array of document links
data // temporary variables

// location
location.address
location.state
location.countryCode
location.postalCode
location.longitude
location.latitude

// local date and time of device as integer number
time.year
time.month // 1-12 // 1-31
time.hour // 0-11 or 0-23
time.min // 0-59
time.sec // 0-59
time.runtime // runtime of AR session in seconds (float)
time.hms // string of hours:minutes.seconds formatted as HH:MM:SS // date as localized string
time.wddate // date as localized string with week day abbreviation

// app info
app.versionString // App version as string
app.versionNumber // App version as numeric value

// device info
dev.type // Phone, Tablet, HMD
dev.use // held (hand-held), worn (HMD)
dev.screen.width
dev.screen.height
dev.screen.scale // scale factor of screen resolution : pixels
dev.arview.width
dev.arview.height
dev.arview.scale // scale factor of screen resolution : pixels
dev.cores // CPU cores
dev.mem.used // memory used by app in MB // total device memory in MB
dev.heat // thermal state from 0 (= nominal) to 4 (= critical)

// device position & orientation held or worn by user
user.pos.x // in meters
user.pos.y
user.pos.z
user.rot.x // euler angles
user.rot.y
user.rot.z // unique user id when using iCloud // value from settings
user.organissation // value from settings
user.usesMetric // bool value from settings
user.usesAudio // bool value from settings
user.usesSpeech // bool value from settings

// mathematical constants
const.pi
const.e // exponential growth constant
const.phi // golden ratio
const.sin30
const.sin45
const.sin60
const.tan30
const.tan45
const.tan60

Data variables may also be used in the precondition of dynamically loaded extensions, e.g., triggered by user-interaction with "content": "ontap=function('https://___', 'getJSON')".

{ ... "preCondition": "data.done == 1" }, { ... "preCondition": "data.var1 > 5.0" }, { ... "preCondition": "data.var2 == 'hello'" }

Conditional Tasks

Conditional tasks only execute when their condition is true. Conditional triggers can be used for:

A condition is defined in the if expression. The task will only execute when the condition evaluates to true. If the if expression is not defined, it evaluates to true by default. For if-then-else behavior, use two conditional tasks with complementary conditions.
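For example, a hypothetical if-then-else on a data variable (assuming data.done holds 0 or 1) can be expressed as two tasks with complementary conditions:

```json
[
  { "if": "data.done == 1", "do": "say", "text": "You are finished." },
  { "if": "data.done == 0", "do": "say", "text": "Please complete the remaining steps." }
]
```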

{
  "dispatch": "stated",
  "if": "walls.@count == 1",
  "do": "say",
  "text": "Add next walls of room."
}

Dispatch Modes

The dispatch parameter defines when and in which scope a task will be executed. Some dispatch modes control how the if expression of a conditional task triggers the task. Possible dispatch modes are:

"dispatch": "atstart",   // immediately at start of loading the action or session
"dispatch": "once",      // only once within the task sequence; default value when not defined; execution starts after capturing is stable
"dispatch": "onchange",  // on each change of the space model
"dispatch": "always",    // several times per second (~5 times per second)
"dispatch": "repeated",  // like "always", but only triggered "each" seconds
"dispatch": "stated",    // like "always", but task is only triggered once when the if-condition result alters from false to true
"dispatch": "steady",    // like "stated", but task is only triggered when the condition result stays true "for" a certain time in seconds; use "reset" to restart after x seconds (sets state back to false), otherwise only triggered once (default)
"dispatch": "activated", // like "stated", but task is always triggered when the if-condition result becomes true
"dispatch": "altered",   // like "stated", but task is always triggered when the if-condition result alters from false to true or from true to false
"dispatch": "onload",    // after loading 3D item to AR view, e.g., to start animation or occlude node
"dispatch": "atstop",    // before session ends

Some examples of dispatched tasks:

{
  "dispatch": "atstart",
  "do": "disable",
  "system": "Voice Guide"
},
{
  "dispatch": "repeated",
  "each": "60", // seconds
  "do": "say",
  "text": "Another minute."
},
{
  "if": "walls.@count == 1",
  "dispatch": "stated",
  "do": "say",
  "text": "Add next walls of room."
},
{
  "if": "function('id', 'proximity') < 1.2",
  "dispatch": "stated",
  "do": "execute",
  "op": "function('https://___', 'getJSON')" // run an action
},
{
  "if": "function('', 'gazingAt') == 1",
  "dispatch": "steady",
  "for": "2.5", // seconds
  "do": "remove",
  "id": ""
},
{
  "if": "function('', 'gazingAt') == 1",
  "dispatch": "steady",
  "for": "2.0", // seconds
  "reset": "5.0", // reset/restart after seconds
  "do": "play",
  "snd": "1113"
},
{
  "if": "data.isON == 1",
  "dispatch": "altered",
  "do": "setValue",
  "expression": "time.runtime > 3.0",
  "id": ""
}

Instant AR

If your content does not depend on a stable floor, e.g., when only using a Detector to augment a scene, you can skip floor detection and immediately start executing the other tasks.

{
  "dispatch": "atstart",
  "do": "skip",
  "state": "floor" // skip floor and wall capturing
}

Function Calls

Most of the tasks are also available as function calls which can be used for scripting. Functions can be sequenced by separating them with a semicolon ';'.

Task Function Calls
{"do": "snapshot"} function('snapshot', 'take');
{"do": "unlock", "id": "_ID"} function('_ID', 'unlock');
{"do": "say", "text": "Hello"} function('Hello', 'say');
{"do": "add", "id": "_ID", "at": "[1,0,2]"} function('_ID', 'add'); function('_ID', 'moveto:::', 1, 0, 2);
... ...
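Building on the table above, two of these function calls could be chained with a semicolon inside a single execute task (the item id _ID is a placeholder):

```json
{ "do": "execute", "op": "function('Hello', 'say'); function('_ID', 'unlock')" }
```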

Functions are available for more complex behavior, such as

See AR functions for details on predicates and expressions within ARchi VR.

DeclARe versus Scripting

Be aware that functions are not validated by the declARe JSON schema and are prone to errors (including crashes). Especially the escaping of strings with double quotes, single quotes, and back quotes (single left ticks) is challenging. Therefore, use declarative tasks whenever possible and only use functions where needed.

Nevertheless, it is possible to mix both approaches by calling functions from within tasks, as well as by running (loaded) tasks via a function call.

Task calling function Function calling action (with included tasks)
{"do": "execute", "op": "function(_)"} function('https://_.json', 'getJSON')


The interactive behavior of items is specified in the content declaration such as "content": "Hello". If the content parameter is set (is not empty), the visual representation of the item will pulsate to depict its interactive status.

Content Declaration Type Interaction Result
"simple message text" inline single line text message opens message popup
"<h1>Title</h1> ..." inline rich-formatted multi-line text opens text popup
"https://___" Web content such as HTML, JPG, SVG, ... opens Web view popup
"ontap=function(_);...;" on tap event listener executes functions on tap
"onpress=function(_);...;" on press event listener executes functions on long press
"ondrag=function(_);...;" on drag event listener executes functions on drag

You may use data variables to manage state in user interaction, e.g. as in "content": "ontap=function('data.done', 'assign:', 1);function(...)"

Rich-Text Popup

A popup presenting multi-line rich text that is held in "content".

Event Listeners

The "content": "ontap=___" declaration makes it possible to attach a click listener to items. An event listener can execute multiple functions, separated with a semicolon (;). If an interactive item is tapped, the function(s) will be executed in the sequence order. The "content": "onpress=___" event listener is triggered by a long press on the item.

You can also install drag event listeners with a "content": "ondrag=___" declaration. See examples in Interactive Data Item.

Hint: Only one event listener can be installed on an item.

Dynamic Sequence of AR Scenes

The following interactive triggers are examples of function calls for requesting actions via a URL with different POST contents:

"content": "ontap=function('https://___', 'getJSON')"
"content": "ontap=function('https://___', 'postUser')"
"content": "onpress=function('https://___', 'postSpace')"

The result of these requests must be an action in JSON which will then be executed. With this approach, AR sessions can guide users through a sequence of AR scenes, with each scene enhancement defined by its own action (consisting of new items and corresponding tasks).

Interaction Icons

There are default icons available to create interactive buttons using image panels. The base URL to these icon images is

File Name Icon Usage
next.png next, forward, go, start, choose
back.png back, backward
up.png up
down.png down
plus.png plus, add
minus.png minus
start.png start
stop.png stop, end
info.png show info / instruction in pop-up window
help.png help
fix.png fix, repair
docu.png show document / web page in pop-up window
msg.png show text message in small pop-up window
play.png stream audio
talk.png speech, say something (e.g., using text-to-speech or audio)
yes.png yes, done, ok
no.png no, cancel, delete
1.png 1, one
2.png 2, two
3.png 3, three
4.png 4, four
5.png 5, five
6.png 6, six
7.png 7, seven
8.png 8, eight
9.png 9, nine
more.png more

Of course, you are free to provide your custom icon images using your own web server.

Scene-specific Catalogs

Set default filter and/or category of shown catalog elements with:

{
  "do": "filter",
  "id": "UI.catalog",
  "term": "_", // filter term
  "category": "Any" // "Any", "Interior", "Equipment", "Commodity"
}

Open catalog pop-up window:

{ "do": "open", "id": "UI.catalog" }

Dynamically install a new (AR scene-specific) catalog element that is not permanently installed in the app:

{
  "do": "install",
  "id": "UI.catalog._", // add name of catalog item
  "name": "Product XYZ",
  "category": "Interior", // "Interior", "Equipment", "Commodity"
  "subtype": "Table",
  "tags": "design furniture office table",
  "imgURL": "",
  "modelURL": "",
  "wxdxh": "1.75x0.75x0.76",
  "scale": "0.01"
}

Scene-specific Services

Set a default filter of shown services with:

{ "do": "filter", "id": "UI.service", "term": "_" // filter term }

Open AR service pop-up window:

{ "do": "open", "id": "UI.service" }

Dynamically install a new (AR scene-specific) service that is not permanently installed in the app:

{
  "do": "install",
  "id": "UI.service._", // add unique name of service
  "name": "Do It Service",
  "desc": "special action",
  "orga": "Company Inc.",
  "logoURL": "",
  "preCondition": "",
  "op": "function(...)"
}

If only existing items are manipulated, you may call functions directly in op. More complex services can call external Actions with get/post such as:

"op": "function('https:___', 'getJSON')"

Scene-specific Help

Dynamically install AR scene-specific help:

{
  "do": "install",
  "id": "", // must start with ""
  "url": "https://___.png" // URL to image with wxh ratio = 540x500 pixels
}

Open the AR help pop-up window:

{ "do": "open", "id": "" }

Uninstall Scene-specific UI Elements

Uninstall transient, scene-specific UI elements with:

{ "do": "uninstall", "id": "UI._" }

UI Dialog

A GUI alert popup for presenting info to the user.

{
  "do": "prompt",
  "title": "_",
  "text": "_",
  "button": "_", // optional, button text, default is "Continue"
  "img": "_URL_" // optional, image URL
}

A GUI alert for getting a YES/NO response as confirmation.

{
  "dispatch": "onchange",
  "if": "items.@count == 20",
  "do": "confirm",
  "dialog": "Would you like to save the scene?",
  "then": "function('scene', 'save')",
  "else": "function('once', 'vibrate')", // optional
  "yes": "Yes", // optional confirmation text
  "no": "No" // optional cancel text
}

Dialog by Function Call

The prompt: function shows a window with a title and a text and can for example be used to present instructions:

"content": "ontap=function('Title', 'prompt:', 'Message text')"

The confirm: function opens a dialog window to get a user's decision for executing activities (embedded function calls):

"content": "ontap=function('Will you do this?', 'confirm:', 'function(`___`, `___`)')" // if confirmed then execute
"content": "ontap=function('Will you do this?', 'confirm::', 'function(`___`, `___`)', 'function(`___`, `___`)')" // if confirmed then execute first, else execute second function(s)

Hint: Do NOT miss the colon in confirm:, otherwise no window appears. Do NOT miss the two colons in confirm:: for the yes-no selection dialog, otherwise no dialog appears.

For function calls embedded in a function call, use back quotes ` (single left ticks) for their string parameters.

UI Controller in Service Selector

Install UI controllers to be shown in the Service selector. These UI controllers change data variables. Use tasks with "dispatch": "always" or "dispatch": "altered" to react to changed values.


Switch:

{
  "do": "install",
  "id": "UI.service.switch._", // add unique name
  "name": "_", // controller label
  "orga": "net.metason.UI-Test", // optional: used for filtering
  "var": "data.var", // variable that is changed by switch
  "preCondition": "" // optional
}


Slider:

{
  "do": "install",
  "id": "UI.service.slider._", // add unique name
  "name": "_", // controller label
  "desc": "_", // optional: sub label
  "var": "data._var", // variable that is changed by slider
  "ref": "min:-10; max:10" // slider settings
}


Stepper:

{
  "do": "install",
  "id": "UI.service.stepper._", // add unique name
  "name": "_",
  "var": "data._var", // variable that is changed by stepper
  "ref": "min:0; max:5; step:0.5;" // stepper settings
}
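As a sketch with assumed names (UI.service.switch.light and data.lightOn are hypothetical), a controller can be paired with a rule that reacts whenever its value changes:

```json
[
  {
    "do": "install",
    "id": "UI.service.switch.light", // hypothetical unique name
    "name": "Light",
    "var": "data.lightOn" // variable changed by the switch
  },
  {
    "if": "data.lightOn == 1",
    "dispatch": "altered", // fires on every change of the condition result
    "do": "say",
    "text": "Light switch toggled."
  }
]
```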

More Examples of Actions

Interactive Tour through Samples of AR Items

The Welcome curation as well as the extensions listing contain "Samples of AR Items", an interactive tour through examples demonstrating Actions which create items and even add some behavior.


Source Code Used in Samples of AR Items

Study the source code of "Samples of AR Items" listed below to learn how to build your own extension. It is referenced by the URL

The extension resource ext.json for "Service Samples" contains two app extensions:

  1. for the Service itself starting with start.json and
  2. for a Workflow that adds an attachment with a link to this documentation (docu.json) if the session is saved and contains sample elements.

The base URL to the JSON source code files and referenced content files is

The actions of "Samples of AR Items" are listed below and cover key concepts of the Actions used in extensions for the ARchi VR App. Each action calls the next action by executing "content": "ontap=function('https://___', 'getJSON')" when the user taps the interactive "forward" button:

Sample Code of App Extensions

Check out Examples of App Extensions, especially the code in Test Curation, which contains several test cases demonstrating the functionality of the ARchi VR App.


An outlook on upcoming features in a future release of ARchi VR:

New Tasks

{ "do": "enable", "system": "Physics" }, { "do": "disable", "system": "Physics" }

New Functions

"function(_num, 'newfunc')" // ;-) tbd

New UI Controller Services

Selector (Menu)

{
  "do": "install",
  "id": "UI.service.selector._", // add unique name of service
  "name": "_",
  "var": "data._var", // variable that is changed by UI controller
  "values": "_label1;_label2;_label3",
  "labels": "_label1;_label2;_label3", // selection labels
  "preCondition": ""
}

Voice-based Dialog

A voice-based dialog for getting a YES/NO response via speech recognition.

{
  "dispatch": "onchange",
  "if": "function('id', 'proximity') < 1.2",
  "do": "ask",
  "question": "Can I give you a hint?",
  "then": "function('Tap on that pulsating item.', 'say')"
}


Staging of Mixed Reality Scene

Put a 3D scene into place; it will appear when the user enters its zone. Try to put the stage in a real place without obstacles, so that the user can walk unobstructed within the stage area.

{
  "do": "put",
  "id": "_id", // item id, e.g., a Group item with children
  "on": "stage",
  "at": "_x _y _z_"
}

Put a 3D scene into place behind a portal at an opening (door or window). The scene appears when the user gets close to the opening and is only seen through the cutout: walls occlude the scene, so the user therefore has to go through the door to fully experience it.

{
  "do": "put",
  "id": "_id",
  "behind": "portal",
  "at": "_id" // cutout id to place the portal at
}

Typically, the zones in front of and behind doors have open spaces of a few square meters, so that MR scenes can be crossed without barriers.



Copyright © 2020-2022 Metason - All rights reserved.