%%% Title = "XR Fragments" area = "Internet" workgroup = "Internet Engineering Task Force"
[seriesInfo] name = "XR-Fragments" value = "draft-XRFRAGMENTS-leonvankammen-00" stream = "IETF" status = "informational"
date = 2023-04-12T00:00:00Z
author initials="L.R." surname="van Kammen" fullname="L.R. van Kammen"
%%%
.# Abstract
This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together with or without a network connection.
The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text) objects across (XR) browsers.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like URI Fragments and visual-meta.
{mainmatter}
# Introduction

How can we add more features to existing text & 3D scenes, without introducing new dataformats?
Historically, there have been many attempts to create the ultimate markup language or 3D fileformat.
However, through the lens of authoring, their lowest common denominator is still: plain text.
XR Fragments allows us to enrich existing dataformats, by recursive use of existing technologies:

- addressability and navigation of 3D scenes/objects: URI Fragments + src/href spatial metadata
- hasslefree tagging across text and spatial objects using BibTeX (visual-meta e.g.)

NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible.
# Conventions and Definitions

| definition | explanation |
|---|---|
| human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markuplanguage) |
| scene | a (local/remote) 3D scene or 3D file (index.gltf e.g.) |
| 3D object | an object inside a scene characterized by vertex-, face- and customproperty data |
| metadata | custom properties of text, 3D Scene or Object(nodes), relevant to machines and a human minority (academics/developers) |
| XR fragment | URI Fragment with spatial hints (`#pos=0,0,0&t=1,100` e.g.) |
| src | (HTML-piggybacked) metadata of a 3D object which instances content |
| href | (HTML-piggybacked) metadata of a 3D object which links to content |
| query | an URI Fragment-operator which queries object(s) from a scene (`#q=cube` e.g.) |
| visual-meta | visual-meta data appended to text which is indirectly visible/editable in XR |
| requestless metadata | opposite of networked metadata (RDF/HTML request-fanouts easily cause framerate-dropping, hence not used a lot in games) |
| FPS | frames per second in spatial experiences (games, VR, AR e.g.), should be as high as possible |
| introspective | inward sensemaking ("I feel this belongs to that") |
| extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma") |
| ◻ | ascii representation of a 3D object/mesh |
# Core principle

XR Fragments strives to serve humans first, machine(implementations) later, by ensuring hasslefree text-to-thought feedback loops.
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).

> "When a car breaks down, the ones without turbosupercharger are easier to fix"
# List of URI Fragments

| fragment | type | example | info |
|---|---|---|---|
| `#pos` | vector3 | `#pos=0.5,0,0` | positions camera to xyz-coord 0.5,0,0 |
| `#rot` | vector3 | `#rot=0,90,0` | rotates camera to xyz-rotation 0,90,0 |
| `#t` | vector2 | `#t=500,1000` | sets animation-loop range between frame 500 and 1000 |
| `#......` | string | `#.cubes` `#cube` | object(s) of interest (fragment to object name or class mapping) |

xyz coordinates are similar to the ones found in SVG Media Fragments.
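As an illustration (not part of the spec), here is a minimal JavaScript sketch of splitting such a fragment string into typed arguments; the function name is hypothetical:

```javascript
// Hypothetical helper: split an XR Fragment string into typed arguments.
// "#pos=0.5,0,0&t=500,1000" -> { pos:[0.5,0,0], t:[500,1000] }
function parseXRFragment(hash) {
  const args = {}
  hash.replace(/^#/, '').split('&').forEach((pair) => {
    const [key, value = ''] = pair.split('=')
    if (!key) return
    const nums = value.split(',').map(Number)
    // comma-separated numbers become a vector (vector2/vector3), otherwise keep the string
    args[key] = value.includes(',') && nums.every(n => !isNaN(n)) ? nums : value
  })
  return args
}

console.log(parseXRFragment('#pos=0.5,0,0&t=500,1000'))
```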
# List of metadata for 3D nodes

| key | type | example (JSON) | info |
|---|---|---|---|
| `name` | string | `"name": "cube"` | available in all 3D fileformats & scenes |
| `class` | string | `"class": "cubes"` | available through custom property in 3D fileformats |
| `href` | string | `"href": "b.gltf"` | available through custom property in 3D fileformats |
| `src` | string | `"src": "#q=cube"` | available through custom property in 3D fileformats |

Popular compatible 3D fileformats: `.gltf`, `.obj`, `.fbx`, `.usdz`, `.json` (THREEjs), COLLADA and so on.

NOTE: XR Fragments are file-agnostic, which means that the metadata exists in programmatic 3D scene(nodes) too.
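For illustration, here is a minimal sketch of how these keys could travel as custom properties inside a glTF node (glTF stores application-specific data under `extras`); the node and values below are just examples, not mandated by this spec:

```json
{
  "nodes": [
    {
      "name": "cube",
      "extras": { "class": "cubes", "href": "b.gltf", "src": "#q=cube" }
    }
  ]
}
```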
# Navigating 3D

Here's an ascii representation of a 3D scene-graph which contains 3D objects (`◻`) and their metadata:

```
+----------------------------------------------+
|                                              |
|  index.gltf                                  |
|    │                                         |
|    ├── ◻ buttonA                             |
|    │      └ href: #pos=1,0,1&t=100,200       |
|    │                                         |
|    └── ◻ buttonB                             |
|           └ href: other.fbx                  |   <-- file-agnostic (can be .gltf .obj etc)
|                                              |
+----------------------------------------------+
```

An XR Fragment-compatible browser viewing this scene allows the end-user to interact with `buttonA` and `buttonB`.
In case of `buttonA` the end-user will be teleported to another location and time in the currently loaded scene, whereas `buttonB` will replace the current scene with a new one (`other.fbx`).
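A minimal sketch (hypothetical function names, reusing `parseXRFragment` from the earlier sketch) of how a browser could dispatch on an `href` value:

```javascript
// Hypothetical dispatcher: an href containing an XR Fragment navigates inside the
// current scene (teleport / set animation-loop), anything else loads a new scene.
function followHref(href, actions) {
  if (href.startsWith('#')) {
    const args = parseXRFragment(href)              // e.g. { pos:[1,0,1], t:[100,200] }
    if (args.pos) actions.teleport(args.pos)
    if (args.t)   actions.setLoop(args.t[0], args.t[1])
  } else {
    actions.loadScene(href)                         // e.g. 'other.fbx' replaces the scene
  }
}

// usage with stub actions (a real browser would move the camera / swap scenes)
followHref('#pos=1,0,1&t=100,200', {
  teleport:  (p) => console.log('teleport to', p),
  setLoop:   (a, b) => console.log('loop frames', a, b),
  loadScene: (url) => console.log('load', url),
})
```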
# Embedding 3D content

Here's an ascii representation of a 3D scene-graph with 3D objects (`◻`) which embeds remote & local 3D objects (`◻`) with/without using queries:

```
+--------------------------------------------------------+  +-------------------------+
|                                                        |  |                         |
|  index.gltf                                            |  | ocean.com/aquarium.fbx  |
|    │                                                   |  |   │                     |
|    ├── ◻ canvas                                        |  |   └── ◻ fishbowl        |
|    │      └ src: painting.png                          |  |         ├─ ◻ bass       |
|    │                                                   |  |         └─ ◻ tuna       |
|    ├── ◻ aquariumcube                                  |  |                         |
|    │      └ src: ://rescue.com/fish.gltf#q=bass%20tuna |  +-------------------------+
|    │                                                   |
|    ├── ◻ bedroom                                       |
|    │      └ src: #q=canvas                             |
|    │                                                   |
|    └── ◻ livingroom                                    |
|           └ src: #q=canvas                             |
|                                                        |
+--------------------------------------------------------+
```

An XR Fragment-compatible browser viewing this scene lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bedroom and livingroom).
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.
Resizing will happen according to its placeholder object (`aquariumcube`), see chapter Scaling.
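A minimal sketch of how a browser might resolve a `src` value; the `actions` interface is hypothetical and only illustrates the local-query vs. remote-file distinction and the fit-to-placeholder step:

```javascript
// Hypothetical src resolver: a src value is either a local query ("#q=canvas"),
// a remote file, or a remote file with a query appended.
function resolveSrc(src, scene, actions) {
  const [url, hash] = src.split('#')
  const query = hash && hash.startsWith('q=') ? decodeURIComponent(hash.slice(2)) : null
  if (!url) {
    // local: instance objects from the current scene selected by the query
    return actions.instanceLocal(scene, query)        // e.g. '#q=canvas'
  }
  // remote: lazy-load the file, optionally keep only the queried objects,
  // then scale the result to fit the placeholder object
  return actions.loadRemote(url).then((loaded) =>
    actions.fitToPlaceholder(query ? actions.filterByQuery(loaded, query) : loaded))
}
```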
# Text in XR (tagging, linking to spatial objects)

We still think and speak in simple text, not in HTML or RDF.
It would be funny if people shouted `<h1>FIRE!</h1>` in case of emergency.
Given the myriad of new (non-keyboard) XR interfaces, keeping text as is (not obscuring it with markup) is preferred.
Ideally metadata should come with the text, but neither obfuscate the text nor live in another file.
Humans first, machines (AI) later.
This way:

- XR Fragments allows hasslefree XR text tagging, using BibTeX metadata at the end of content (like visual-meta).
- XR Fragments allows hasslefree textual tagging, spatial tagging, and supra tagging, by mapping 3D/text object (class)names to BibTeX
- inline BibTeX is the minimum required requestless metadata-layer for XR text; RDF/JSON is great but optional (and too verbose for the spec-usecases).
- Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see the core principle).
- anti-pattern: hardcoupling a mandatory obtrusive markuplanguage or framework with an XR browser (HTML/VRML/Javascript) (see the core principle)
- anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see the core principle)

This allows recursive connections between text itself, as well as 3D objects and vice versa, using BibTeX-tags:
```
+----------------------------------------------------+
|  My Notes                                          |
|                                                    |
|  The houses seen here are built in baroque style.  |
|                                                    |
|  @house{houses,     <----- XR Fragment triple/tag: tiny & phrase-matching BibTeX
|    url = {#.house}  <------------------- XR Fragment URI
|  }                                                 |
+----------------------------------------------------+
```

This sets up the following associations in the scene:

- textual tag: text or spatial occurrences named 'houses' are now automatically tagged with 'house'
- spatial tag: spatial object(s) with class:house (#.house) are now automatically tagged with 'house'
- supra-tag: text- or spatial-objects named 'house' (spatially) elsewhere are now automatically tagged with 'house'

Spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted, links can be manipulated by the user.
The simplicity of appending BibTeX (humans first, machines later) is demonstrated by visual-meta in greater detail, and makes it perfect for GUIs to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere, anytime.
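A minimal sketch of how a browser might derive these associations from decoded BibTeX entries (the entry-key format follows the example parser further below; the object shape and function name are hypothetical):

```javascript
// Hypothetical tagger: given decoded BibTeX entries and scene objects,
// attach tags via name- or class-matching.
function applyTags(meta, objects) {
  Object.keys(meta).forEach((entry) => {           // e.g. '@house{houses'
    const m = entry.match(/^@(\w+)\{(\w+)/)
    if (!m) return
    const [, tag, key] = m                          // tag='house', key='houses'
    const target = (meta[entry].url || '').replace(/^#\./, '')   // '#.house' -> 'house'
    objects.forEach((o) => {
      if (o.name === key || o.class === target) (o.tags = o.tags || []).push(tag)
    })
  })
  return objects
}

console.log(applyTags(
  { '@house{houses': { url: '#.house' } },
  [{ name: 'houses' }, { name: 'barn', class: 'house' }]
))
```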
# Default Data URI mimetype

The `src`-values work as expected (respecting mime-types), however:

The XR Fragment specification bumps the traditional default browser-mimetype

`text/plain;charset=US-ASCII`

to a green eco-friendly:

`text/plain;charset=utf-8;bibtex=^@`

This indicates that any BibTeX metadata starting with `@` will automatically get filtered out, and that the browser:

- automatically detects textual links between textual and spatial objects

Its concept is similar to literate programming. Its implications are that local/remote responses can now:

- (de)multiplex/repair human text and requestless metadata (see the core principle)
- no separated implementation/network-overhead for metadata (see the core principle)
- ensure high FPS: HTML/RDF historically is too 'requesty' for game studios
- rich send/receive/copy-paste everywhere by default, metadata being retained (see the core principle)
- less network requests, therefore less webservices, therefore less servers, and overall better FPS in XR

This significantly expands the expressiveness and portability of human text, by postponing machine-concerns to the end of the human text, in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).

For all other purposes, regular mimetypes can be used (but are not required by the spec).
To keep XR Fragments a lightweight spec, BibTeX is used for text-spatial object mappings (not a scripting language or RDF e.g.).
Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (but this is not interpreted by this spec).
# URL and Data URI

```
+----------------------------------------------+  +------------------------+
|                                              |  | author.com/article.txt |
|  index.gltf                                  |  +------------------------+
|    │                                         |  |                        |
|    ├── ◻ article_canvas                      |  |  Hello friends.        |
|    │      └ src: ://author.com/article.txt   |  |                        |
|    │                                         |  |  @friend{friends       |
|    └── ◻ note_canvas                         |  |  ...                   |
|           └ src:`data:welcome human @...`    |  |  }                     |
|                                              |  +------------------------+
|                                              |
+----------------------------------------------+
```

The enduser will only see `welcome human` and `Hello friends` rendered spatially.
The beauty is that text (AND visual-meta) in a Data URI promotes rich copy-paste.
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).

The mapping between 3D objects and text (src-data) is simple:
Example:

```
+--------------------------------------------------------+
|                                                        |
|  index.gltf                                            |
|    │                                                   |
|    └── ◻ rentalhouse                                   |
|           └ class: house                               |
|           └ ◻ note                                     |
|                 └ src:`data: todo: call owner          |
|                        @house{owner,                   |
|                          url = {#.house}               |
|                        }`                              |
+--------------------------------------------------------+
```

Attaching visual-meta as `src` metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to the `name` of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:

- When the user surfs to https://.../index.gltf#AI the XR Fragments-parser points the enduser to the AI object, and can show contextual info about it.
- When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
# BibTeX as lowest common denominator for tagging/triples

The everything-is-text focus of BibTeX is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective). BibTeX-appendices (visual-meta e.g.) are already adopted in the physical world (academic books), perhaps due to their terseness & simplicity:

- frictionless copy/pasting (by humans) of (unobtrusive) content AND metadata
- an introspective 'sketchpad' for metadata, which can (optionally) mature into RDF later

| characteristic | Plain Text (with BibTeX) | RDF |
|---|---|---|
| perspective | introspective | extrospective |
| space/scope | local | world |
| everything is text (string) | yes | no |
| leaves (dictated) text intact | yes | no |
| markup language(s) | no (appendix) | ~4 different |
| polyglot format | no | yes |
| easy to copy/paste content+metadata | yes | depends |
| easy to write/repair | yes | depends |
| easy to parse | yes (fits on A4 paper) | depends |
| infrastructure storage | selfcontained (plain text) | (semi)networked |
| tagging | yes | yes |
| freeform tagging/notes | yes | depends |
| specialized file-type | no | yes |
| copy-paste preserves metadata | yes | depends |
| emoji | yes | depends |
| predicates | free | pre-determined |
| implementation/network overhead | no | depends |
| used in (physical) books/PDF | yes (visual-meta) | no |
| terse categoryless predicates | yes | no |
| nested structures | no | yes |

To serve humans first, the human 'fuzzy symbolical mind' comes first, and the 'categorized typesafe RDF hive mind' later.
# XR Text (BibTeX) example parser

Here's a naive XR Text (de)multiplexer in JavaScript (which also supports visual-meta start/end-blocks):
```javascript
xrtext = {
  decode: {
    text: (str) => {
      let meta={}, text='', last='', data = '';
      str.split(/\r?\n/).map( (line) => {
        if( !data ) data = last === '' && line.match(/^@/) ? line[0] : ''
        if( data ){
          if( line === '' ){
            xrtext.decode.bibtex(data.substr(1),meta)
            data=''
          }else data += `${line}\n`
        }
        text += data ? '' : `${line}\n`
        last=line
      })
      return {text, meta}
    },
    bibtex: (str,meta) => {
      let st = [meta]
      str
      .split(/\r?\n/ )
      .map( s => s.trim() ).join("\n") // be nice
      .replace( /}@/, "}\n@" )         // to authors
      .replace( /},}/, "},\n}" )       // which struggle
      .replace( /^}/, "\n}" )          // with writing single-line BiBTeX
      .split( /\n/ )                   //
      .filter( c => c.trim() )         // actual processing:
      .map( (s) => {
        if( s.match(/(^}|-end})/) && st.length > 1 ) st.shift()
        else if( s.match(/^@/) ) st.unshift( st[0][ s.replace(/(-start|,)/g,'') ] = {} )
        else s.replace( /(\w+)\s*=\s*{(.*)}(,)?/g, (m,k,v) => st[0][k] = v )
      })
      return meta
    }
  },
  encode: (text,meta) => {
    if( text === false ){
      if (typeof meta === "object") {
        return Object.keys(meta).map(k =>
          typeof meta[k] == "string"
            ? ` ${k} = {${meta[k]}},`
            : `${ k.match(/[}{]$/) ? k.replace('}','-start}') : `${k},` }\n` +
              `${ xrtext.encode( false, meta[k])}\n` +
              `${ k.match(/}$/) ? k.replace('}','-end}') : '}' }\n`
              .split("\n").filter( s => s.trim() ).join("\n")
        )
        .join("\n")
      }
      return meta.toString();
    }else return `${text}\n${xrtext.encode(false,meta)}`
  }
}

var str = `hello world\n\n@house{houses,\n  url = {#.house}\n}\n` // example input
var {meta,text} = xrtext.decode.text(str)       // demultiplex text & bibtex
meta['@foo{'] = { "note":"note from the user"}  // edit metadata
xrtext.encode(text,meta)                        // multiplex text & bibtex back together
```
The above can be used as a starting point for LLMs to translate/steelman to any language.
# HYPER copy/paste

The previous example offers something exciting compared to simple copy/paste of 3D objects or text. XR Fragments allow HYPER-copy/paste: time, space and text interlinked. Therefore, the enduser in an XR Fragment-compatible browser can copy/paste/share data in these ways:

- time/space: 3D object (current animation-loop)
- text: TeXt object (including BibTeX/visual-meta if any)
- interlinked: Collected objects by visual-meta tag
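A sketch of what such a HYPER-copy clipboard payload could look like (field names are illustrative, not defined by this spec):

```javascript
// Hypothetical HYPER-copy payload: what a browser could put on the clipboard
// when copying the aquariumcube while frames 100-200 are looping.
const clipboard = {
  object: 'aquariumcube',                                          // time/space: the 3D object
  t: [100, 200],                                                   // current animation-loop
  text: 'todo: call owner\n\n@house{owner,\n  url = {#.house}\n}', // text + BibTeX/visual-meta
  tags: ['house'],                                                 // interlinked: collected by visual-meta tag
}
console.log(JSON.stringify(clipboard, null, 2))
```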
# XR Fragment queries

Include, exclude, hide/show objects using space-separated strings:

```
#q=cube
#q=cube -ball_inside_cube
#q=* -sky
#q=-.language .english
#q=cube&rot=0,90,0
#q=price:>2 price:<5
```

It's a simple but powerful syntax which allows css-like class/id-selectors with a searchengine prompt-style feeling:

- queries are only executed when embedded in the asset/scene (thru `src`). This is to prevent sharing of scene-tampered URLs.
- search words are matched against 3D object names or metadata-key(values)
- `#` equals `#q=*`
- words starting with `.` (`.language`) indicate class-properties

For example: `#q=.foo` is a shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. A simple `#q=cube` will just select an object named `cube`.
## including/excluding

| operator | info |
|---|---|
| `*` | select all objects (only allowed in `src` custom property) in the current scene (after the default predefined_view `#` was executed) |
| `-` | removes/hides object(s) |
| `:` | indicates an object-embedded custom property key/value |
| `.` | alias for `class:` (`.foo` equals `class:foo`) |
| `>` `<` | compare float or int number |
| `/` | reference to root-scene. Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by `src`): `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects), whereas `#q=-cube` hides both object `cube` in the root-scene AND nested `cube` objects |
» example implementation » example 3D asset » discussion
## Query Parser

Here's how to write a query parser:

- create an associative array/object to store query-arguments as objects
- detect object id's & properties `foo:1` and `foo` (reference regex: `/^.*:[><=!]?/`)
- detect excluders like `-foo`, `-foo:1`, `-.foo`, `-/foo` (reference regex: `/^-/`)
- detect root selectors like `/foo` (reference regex: `/^[-]?\//`)
- detect class selectors like `.foo` (reference regex: `/^[-]?class$/`)
- detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/`)
- expand aliases like `.foo` into `class:foo`
- for every query token split the string on `:`
- create an empty array `rules`
- then strip the key-operator: convert `-foo` into `foo`
- add operator and value to the rule-array
- therefore we set `id` to `true` or `false` (false = excluder `-`)
- and we set `root` to `true` or `false` (true = `/` root selector is present)
- convert key `/foo` into `foo`
- finally add the key/value to the store (`store.foo = {id:false,root:true}` e.g.)

An example query-parser (which compiles to many languages) can be found here.
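For illustration only (this is not the linked reference implementation), a minimal JavaScript sketch following the steps above; the stored property names are illustrative:

```javascript
// Minimal sketch of the steps above: split a query into tokens and build a store
// of rules; id=false marks an excluder '-', root=true marks a '/' root selector.
function parseQuery(query) {
  const store = {}
  query.split(/\s+/).filter(Boolean).forEach((token) => {
    const rule = { id: true, root: false }
    if (token.startsWith('-')) { rule.id = false; token = token.slice(1) }
    if (token.startsWith('/')) { rule.root = true; token = token.slice(1) }
    if (token.startsWith('.')) token = 'class:' + token.slice(1)   // expand alias
    const [key, value] = token.split(':')
    if (value !== undefined) {
      const m = value.match(/^([><=!]?)(.*)/)                      // optional compare-operator
      rule.operator = m[1] || '='
      rule.value = /^[0-9.]+$/.test(m[2]) ? parseFloat(m[2]) : m[2]
    }
    store[key] = rule
  })
  return store
}

console.log(parseQuery('* -sky price:>2 -/cube'))
// -> cube ends up as { id:false, root:true }, matching the store.foo example above
```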
# XR Fragment URI Grammar

```
reserved    = gen-delims / sub-delims
gen-delims  = "#" / "&"
sub-delims  = "," / "="
```

Example: `://foo.com/my3d.gltf#pos=1,0,0&prio=-5&t=0,100`

| Demo | Explanation |
|---|---|
| `pos=1,2,3` | vector/coordinate argument e.g. |
| `pos=1,2,3&rot=0,90,0&q=.foo` | combinators |
# Security Considerations

Since XR Text contains metadata too, the user should be able to set up tagging-rules, so the copy-paste feature can:

- filter out sensitive data when copy/pasting (XR text with `class:secret` e.g.)
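A minimal sketch of such a tagging-rule filter (the rule shape and the `class` field on entries are assumptions), reusing the (de)multiplexer from the example-parser chapter:

```javascript
// Hypothetical copy-filter: drop BibTeX entries whose class is 'secret'
// before the text is placed on the clipboard (rule name is illustrative).
function filterSensitive(meta, rules = { excludeClass: 'secret' }) {
  const safe = {}
  Object.keys(meta).forEach((k) => {
    if (meta[k].class !== rules.excludeClass) safe[k] = meta[k]
  })
  return safe
}

// usage with the (de)multiplexer sketched earlier:
// let {text, meta} = xrtext.decode.text(clipboardText)
// clipboardText = xrtext.encode(text, filterSensitive(meta))
```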
# IANA Considerations

This document has no IANA actions.

# Acknowledgments

TODO acknowledge.