This draft offers a specification for 4D URLs & navigation, to link 3D scenes and text together, with or without a network connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text) objects across (XR) browsers.<br>
XR Fragments allows us to enrich existing data formats through recursive use of existing proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and [visual-meta](https://visual-meta.info).<br>
1. hassle-free tagging across text and spatial objects using [BibTeX](https://en.wikipedia.org/wiki/BibTeX) ([visual-meta](https://visual-meta.info) e.g.)
> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
Let's always focus on average humans: the 'fuzzy symbolic mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg).
|definition|explanation|
|----------|-----------|
|human | a sentient being who thinks fuzzy, absorbs, and shares thought (by plain text, not markup language) |
|scene | a (local/remote) 3D scene or 3D file (`index.gltf` e.g.) |
|3D object | an object inside a scene, characterized by vertex-, face- and custom-property data |
|metadata | custom properties of text, 3D scenes or objects (nodes), relevant to machines and a human minority (academics/developers) |
|XR fragment | URI fragment with spatial hints (`#pos=0,0,0&t=1,100` e.g.) |
|src | (HTML-piggybacked) metadata of a 3D object which instances content |
|href | (HTML-piggybacked) metadata of a 3D object which links to content |
|query | a URI-fragment operator which queries object(s) from a scene (`#q=cube`) |
|visual-meta | [visual-meta](https://visual-meta.info) data appended to text, which is indirectly visible/editable in XR |
|requestless metadata | the opposite of networked metadata (RDF/HTML requests can easily fan out and drop framerate, hence are rarely used in games) |
|FPS | frames per second in spatial experiences (games, VR, AR e.g.); should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma") |
|`◻` | ASCII representation of a 3D object/mesh |
An XR Fragment-compatible browser viewing this scene lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and living room).<br>
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.<br>
Resizing will happen according to its placeholder object (`aquariumcube`); see chapter Scaling.<br>
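To make this concrete, here is a minimal, non-normative sketch of how a browser could act on such `src` metadata, assuming a three.js-style scene graph (the helper `lazyLoadInto` is hypothetical):

```js
// non-normative sketch: reacting to `src` metadata during scene traversal
// (glTF custom properties / `extras` surface as `userData` in three.js)
scene.traverse((node) => {
  const src = node.userData.src;
  if (!src) return;
  const [url, fragment] = src.split("#"); // e.g. "ocean.com/aquarium.gltf#q=bass tuna"
  // lazy-load the (queried) content and instance it inside the placeholder,
  // scaled to the placeholder's dimensions (hypothetical helper)
  lazyLoadInto(node, url, fragment);
});
```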
1. XR Fragments allows <b id="tagging-text">hassle-free XR text tagging</b>, using BibTeX metadata **at the end of content** (like [visual-meta](https://visual-meta.info)).
1. XR Fragments allows hassle-free <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names to BibTeX
5. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)).
6. anti-pattern: hard-coupling a mandatory **obtrusive markup language** or framework with an XR browser (HTML/VRML/JavaScript) (see [the core principle](#core-principle))
7. anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle))
This allows recursive connections between text itself, as well as 3D objects and vice versa, using **BibTeX tags**:
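For example, a hypothetical BibTeX tag appended to plain text (visual-meta style) could reference all objects of class `house` in the scene; the entry name and URL below are illustrative, not normative:

```bibtex
This text is about the houses in the scene.

@house{houses,
  url = {#.house}
}
```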
This allows spatial wires to be rendered, words to be highlighted, spatial objects to be highlighted/moved/scaled, and links to be manipulated by the user.<br>
> The simplicity of appending BibTeX (humans first, machines later) is demonstrated by [visual-meta](https://visual-meta.info) in greater detail, and makes it perfect for HUDs/GUIs to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere, anytime.
> This significantly expands the expressiveness and portability of human text, by **postponing machine-concerns to the end of the human text**, in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).
Attaching visual-meta as `src` metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to `name` of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:
1. When the user surfs to https://.../index.gltf#AI, the XR Fragments parser points the end-user to the `AI` object and can show contextual info about it (see the sketch below).
2. When (partial) remote content is embedded thru XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
* interlinked: objects collected by visual-meta tag
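A minimal, non-normative sketch of point 1 above, assuming a three.js-style scene and an already-parsed visual-meta glossary (`focusCameraOn` and `showContextualInfo` are hypothetical helpers):

```js
// non-normative sketch: resolving #AI to a 3D object plus its glossary entry
function navigateTo(fragment, scene, glossary) {
  const name  = decodeURIComponent(fragment.replace(/^#/, ""));
  const node  = scene.getObjectByName(name);           // 3D object named "AI"
  const entry = glossary.find((e) => e.name === name); // visual-meta entry "AI"
  if (node)  focusCameraOn(node);       // point the end-user to the object
  if (entry) showContextualInfo(entry); // show its contextual info
}
```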
# XR Fragment queries
Include, exclude, or hide/show objects using space-separated strings:
* `#q=cube`
* `#q=cube -ball_inside_cube`
* `#q=* -sky`
* `#q=-.language .english`
* `#q=cube&rot=0,90,0`
* `#q=price:>2 price:<5`
It's a simple but powerful syntax which allows <b>CSS</b>-like class/id-selectors with a search-engine prompt-style feeling:
1. queries are only executed when <b>embedded</b> in the asset/scene (thru `src`). This is to prevent sharing of scene-tampered URLs.
2. search words are matched against 3D object names or metadata keys/values
3. `#` equals `#q=*`
4. words starting with `.` (`.language`) indicate class-properties
> **For example**: `#q=.foo` is shorthand for `#q=class:foo`, which will select objects with custom property `class`:`foo`. A simple `#q=cube` will select an object named `cube`.
* see [an example video here](https://coderofsalvation.github.io/xrfragment.media/queries.mp4)
|operator|explanation|
|--------|-----------|
|`*` | select all objects (only allowed in a `src` custom property) in the <b>current</b> scene (<b>after</b> the default [[predefined_view|predefined_view]] `#` was executed)|
|`-` | removes/hides object(s) |
|`:` | indicates an object-embedded custom property key/value |
|`.` | alias for `class:` (`.foo` equals `class:foo`) |
|`>` `<`| compare float or int number |
|`/` | reference to the root-scene.<br>Useful for (preventing) showing/hiding objects in nested scenes (instanced via [[src]]).<br>`#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects).<br>`#q=-cube` hides object `cube` in the root-scene <b>AND</b> nested `cube` objects |
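As a rough illustration of the `-` and `/` operators, here is a non-normative sketch of applying parsed rules during traversal (three.js-style; `store` is the output of a parser like the one sketched at the end of this section):

```js
// non-normative sketch: hiding/showing objects according to parsed query rules
function applyQuery(rootScene, store) {
  rootScene.traverse((obj) => {
    let rule = store[obj.name];                   // name-based rule, e.g. `-cube`
    const byClass = store["class"];               // class-based rule, e.g. `.english`
    if (!rule && byClass && obj.userData.class === byClass.value) rule = byClass;
    if (!rule) return;
    const inRootScene = obj.parent === rootScene; // crude "root-scene" test
    if (rule.root && !inRootScene) return;        // `/cube` affects root-scene only
    obj.visible = rule.id;                        // excluders (`-`, id:false) hide
  });
}
```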
[» example implementation](https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js)
[» example 3D asset](https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192)
1. detect excluders like `-foo`,`-foo:1`,`-.foo`,`-/foo` (reference regex: `/^-/` )
1. detect root selectors like `/foo` (reference regex: `/^[-]?\//` )
1. detect class selectors like `.foo` (reference regex: `/^[-]?class$/` )
1. detect number values like `foo:1` (reference regex: `/^[0-9\.]+$/` )
1. expand aliases like `.foo` into `class:foo`
1. for every query token split string on `:`
1. create an empty array `rules`
1. then strip key-operator: convert "-foo" into "foo"
1. add operator and value to rule-array
1. therefore we set `id` to `true` or `false` (false = excluder `-`)
1. and we set `root` to `true` or `false` (true=`/` root selector is present)
1. we convert key '/foo' into 'foo'
1. finally we add the key/value to the store (`store.foo = {id:false,root:true}` e.g.)
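Putting the steps above together, a minimal, non-normative JavaScript sketch of such a query parser (the reference implementation is the Haxe parser linked below):

```js
// non-normative sketch of the parsing steps listed above
function parseQuery(qs) {
  const store = {};
  for (let token of qs.split(/\s+/)) {
    const rule = {};
    rule.id = !/^-/.test(token);         // excluder `-` => id:false
    token = token.replace(/^-/, "");     // strip operator: "-foo" -> "foo"
    rule.root = /^\//.test(token);       // root selector `/foo` => root:true
    token = token.replace(/^\//, "");    // convert "/foo" into "foo"
    if (token[0] === ".")                // expand alias ".foo" -> "class:foo"
      token = "class:" + token.slice(1);
    let [key, value] = token.split(":"); // split token on `:`
    if (value !== undefined) {
      if (/^[0-9\.]+$/.test(value)) value = parseFloat(value); // number values
      rule.value = value;                // comparators like ">2" stay strings
    }
    store[key] = rule;                   // e.g. store.foo = {id:false, root:true}
  }
  return store;
}

// usage: parseQuery("-/cube .english price:>2")
// => { cube:  {id:false, root:true},
//      class: {id:true,  root:false, value:"english"},
//      price: {id:true,  root:false, value:">2"} }
```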
> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx)