XR Fragments allows us to enrich existing data formats through the recursive use of proven technologies like [URI Fragments](https://en.wikipedia.org/wiki/URI_fragment) and BibTeX notation.<br>
> Almost every idea in this document is demonstrated at [https://xrfragment.org](https://xrfragment.org)
1. hassle-free tagging across text and spatial objects using [BibTeX](https://en.wikipedia.org/wiki/BibTeX) 'tags' as an appendix (see [visual-meta](https://visual-meta.info), for example)
> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
Let's always focus on average humans: the 'fuzzy symbolic mind' must be served first, before serving the greater ['categorized typesafe RDF hive mind'](https://en.wikipedia.org/wiki/Borg).
|requestless metadata | the opposite of networked metadata (RDF/HTML requests can easily fan out and drop framerates, hence are rarely used in games) |
|FPS | frames per second in spatial experiences (games, VR, AR, etc.); should be as high as possible |
|introspective | inward sensemaking ("I feel this belongs to that") |
|extrospective | outward sensemaking ("I'm fairly sure John is a person who lives in Oklahoma") |
|`◻` | ASCII representation of a 3D object/mesh |
An XR Fragment-compatible browser viewing this scene lazy-loads and projects `painting.png` onto the (plane) object called `canvas` (which is copy-instanced in the bed and living room).<br>
Also, after lazy-loading `ocean.com/aquarium.gltf`, only the queried objects `bass` and `tuna` will be instanced inside `aquariumcube`.<br>
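Purely as illustration, here is a minimal TypeScript sketch of that embedding flow, assuming a hypothetical `loadScene()` loader and a simplified node shape (neither is part of the spec):

```
interface SceneNode {
  name: string;
  children: SceneNode[];
}

// Hypothetical loader stub; a real XR browser would fetch and parse glTF here.
async function loadScene(url: string): Promise<SceneNode> {
  return { name: "root", children: [] };
}

// Resolve a `src` value such as "https://ocean.com/aquarium.gltf#q=bass tuna":
// lazy-load the remote scene, then instance only the queried objects.
async function resolveSrc(src: string): Promise<SceneNode[]> {
  const [url, fragment = ""] = src.split("#");
  const query = new URLSearchParams(fragment).get("q") ?? "";
  const wanted = query.split(" ").filter(Boolean);   // e.g. ["bass", "tuna"]
  const scene = await loadScene(url);
  return wanted.length
    ? scene.children.filter(node => wanted.includes(node.name))  // only `bass` and `tuna`
    : scene.children;                                // no query: embed the whole scene
}
```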
1. XR Fragments allows <b id="tagging-text">hassle-free XR text tagging</b>, using BibTeX metadata **at the end of content** (like [visual-meta](https://visual-meta.info)).
1. XR Fragments allows hassle-free <a href="#textual-tagging">textual tagging</a>, <a href="#spatial-tagging">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX 'tags'
3. inline BibTeX 'tags' are the minimum required **requestless metadata**-layer for XR text; RDF/JSON is great (but fits better in the application layer)
5. Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see [the core principle](#core-principle)).
6. anti-pattern: hard-coupling a mandatory **obtrusive markup language** or framework to an XR browser (HTML/VRML/JavaScript) (see [the core principle](#core-principle))
7. anti-pattern: limiting human introspection by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see [the core principle](#core-principle))
| <bid="textual-tagging">textual</b> | text containing 'houses' is now automatically tagged with 'house' (incl. plaintext `src` child nodes) |
| <bid="spatial-tagging">spatial</b> | spatial object(s) with `"class":"house"` (because of `{#.house}`) are now automatically tagged with 'house' (incl. child nodes) |
| <bid="supra-tagging">supra</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, named 'house', are automatically tagged with 'house' (current node to root node) |
| <bid="omni-tagging">omni</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (too node to all nodes) |
| <bid="infinite-tagging">infinite</b> | text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (too node to all nodes) |
This empowers the enduser's spatial expressiveness (see [the core principle](#core-principle)): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, and links can be manipulated by the user.<br>
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated by [visual-meta](https://visual-meta.info) in greater detail.
1. The XR Browser needs to offer a global setting/control to adjust tag-scope, with at least the range `[text, spatial, text+spatial, supra, omni, infinite]`
1. The XR Browser should always allow the human to view/edit the BibTeX metadata manually, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere, anytime.
> NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with `"class":"house"` or name "house". This multiplexing of id/category is deliberate because of [the core principle](#core-principle).
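A TypeScript sketch of this multiplexed matching (the helper and the object shape are illustrative assumptions, not normative):

```
// 'infinite' tag-scope: the tag 'house' matches object names, `class`
// properties AND words inside text, including the naive plural 'houses'.
function matchesTag(tag: string, obj: { name?: string; class?: string; text?: string }): boolean {
  const forms = [tag, tag + "s"];                    // 'house' and 'houses'
  const words = (obj.text ?? "").toLowerCase().split(/\W+/);
  return forms.some(form =>
    obj.name === form || obj.class === form || words.includes(form)
  );
}

matchesTag("house", { name: "house" });              // true (spatial, by name)
matchesTag("house", { class: "house" });             // true (spatial, by class)
matchesTag("house", { text: "Houses are homes" });   // true (textual, plural)
```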
> This significantly expands expressiveness and portability of human text, by **postponing machine-concerns to the end of the human text**, in contrast to literally interweaving content with markup symbols (or extra network requests, webservices, etc.).
Attaching visual-meta as `src` metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to `name` of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:
1. When the user surfs to https://.../index.gltf#AI, the XR Fragments parser points the enduser to the AI object and can show contextual info about it.
2. When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.
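For illustration, a small TypeScript sketch of that name-to-glossary mapping; the entry shape is an assumption (see [visual-meta](https://visual-meta.info) for the actual format):

```
interface GlossaryEntry {
  name: string;         // maps to a 3D object name or class
  description: string;
}

// When the user surfs to #AI, show contextual info from the visual-meta glossary.
function glossaryFor(objectName: string, glossary: GlossaryEntry[]): GlossaryEntry | undefined {
  return glossary.find(entry => entry.name === objectName);
}

const glossary: GlossaryEntry[] = [{ name: "AI", description: "Artificial Intelligence" }];
glossaryFor("AI", glossary);   // -> contextual info for the `AI` object
```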
> "When a car breaks down, the ones **without** turbosupercharger are easier to fix"
Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br>
In a way, the RDF project should welcome it as a missing sensemaking precursor to (eventual) extrospective RDF.<br>
BibTeX-appendices are already used in the digital AND physical world (academic books, [visual-meta](https://visual-meta.info)), perhaps due to its terseness & simplicity.<br>
In that sense, it's one step up from the `.ini` file format (which has never leaked into the physical book-world):
> The above can be used as a starting point for LLMs to translate/steelman it into a more formal form/language.
1. The XR Fragments spec (de)serializes BibTeX, but does not aim to harden the BibTeX format
2. Dumb, unnested BibTeX: always deserialize to a flat lookup table of tags, for speed & simplicity ([core principle](#core-principle))
3. multi-line BibTeX values should be supported
4. BibTeX snippets should always start at the beginning of a line (regex: `^@`), hence mimetype `text/plain;charset=utf-8;bibtex=^@`
5. Be strict in sending (`encode()`) Dumb BibTeX (start/stop-section becomes a property) (*)
6. Be liberal in receiving, hence a relatively bigger `decode()` (which also supports [visual-meta](https://visual-meta.info) start/stop-sections, e.g.)
```
@{references-start}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
}
```
The above BibTeX flavor can be imported, but will be rewritten to Dumb BibTeX to satisfy rules 2 and 5, as well as the [core principle](#core-principle).
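As a sketch of such a liberal `decode()` in TypeScript (the regexes and the flat record shape are illustrative assumptions, not part of the spec):

```
type DumbTag = Record<string, string>;

// Liberal decode() into Dumb BibTeX: a flat lookup table of tags (rule 2),
// with start/stop-sections folded into a `section` property (rule 5).
function decode(text: string): DumbTag[] {
  const tags: DumbTag[] = [];
  let section = "";
  for (const block of text.split(/^@/m).slice(1)) {      // rule 4: snippets start at ^@
    const start = block.match(/^\{([a-z-]+)-start\}/);   // e.g. @{references-start}
    if (start) { section = start[1]; continue; }
    if (/^\{[a-z-]+-end\}/.test(block)) { section = ""; continue; }
    const head = block.match(/^(\w+)\{([^,]*),/);        // e.g. @misc{emilyHegland/...
    if (!head) continue;                                 // be liberal: skip unknown junk
    const tag: DumbTag = { "@": head[1], id: head[2] };
    if (section) tag.section = section;
    for (const kv of block.matchAll(/(\w+)\s*=\s*\{([^}]*)\}/g))
      tag[kv[1]] = kv[2];                                // multi-line values work too (rule 3)
    tags.push(tag);
  }
  return tags;
}
```

Running `decode()` on the snippet above yields one flat record along the lines of `{"@": "misc", id: "emilyHegland/Edgar&Frod", section: "references", author: "Emily Hegland", ...}`.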
> **For example**: `#q=.foo` is a shorthand for `#q=class:foo`, which selects objects with custom property `class`: `foo`. A simple `#q=cube` just selects an object named `cube`.
| `*` | select all objects (only useful in `src` custom property) |
| `-` | removes/hides object(s) |
| `:` | indicates an object-embedded custom property key/value |
| `.` | alias for `class:`, i.e. `.foo` equals `class:foo` |
| `>` `<` | compare float or integer values |
| `/` | reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by `src`) (*) |
> \* = `#q=-/cube` hides object `cube` only in the root-scene (not nested `cube` objects)<br> `#q=-cube` hides object `cube` both in the root-scene <b>AND</b> nested `cube` objects
> An example query-parser (which compiles to many languages) can be [found here](https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx)
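The linked Haxe implementation is the reference; purely for illustration, here is a minimal TypeScript sketch covering the operators above (`QueryRule` and `parseQuery` are assumed names, not part of the spec):

```
interface QueryRule {
  key: string;        // property to test: "name" or a custom property like "class"
  value: string;      // may still carry a comparator, e.g. ">2"
  exclude: boolean;   // "-" prefix: remove/hide instead of select
  root: boolean;      // "/" prefix: only applies to the root-scene
}

function parseQuery(q: string): QueryRule[] {
  return q.split(" ").filter(Boolean).map(token => {
    const exclude = token.startsWith("-");
    if (exclude) token = token.slice(1);
    const root = token.startsWith("/");
    if (root) token = token.slice(1);
    if (token.startsWith("."))                  // ".foo" is shorthand for "class:foo"
      return { key: "class", value: token.slice(1), exclude, root };
    const colon = token.indexOf(":");
    return colon >= 0
      ? { key: token.slice(0, colon), value: token.slice(colon + 1), exclude, root }
      : { key: "name", value: token, exclude, root };  // bare "cube" selects by name
  });
}

parseQuery("-/cube .house price:>2");
// -> hide root-scene `cube`, select class 'house', select objects with price > 2
```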