<p>This draft offers a specification for 4D URLs &amp; navigation, to link 3D scenes and text together with or without a network connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and tagging of interactive (text) objects across (XR) browsers.<br>
XR Fragments allows us to enrich existing data formats, by recursive use of existing, proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and <a href="https://visual-meta.info">visual-meta</a>.<br></p>
<li>addressability and navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>hassle-free tagging across text and spatial objects using BibTeX (e.g. <a href="https://visual-meta.info">visual-meta</a>)</li>
</ol>
<blockquote>
<p>NOTE: The chapters in this document are ordered from high-level to low-level (technical) as much as possible</p>
<td>available through custom property in 3D file formats</td>
</tr>
</tbody>
</table>
<p>Popular compatible 3D file formats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>COLLADA</code> and so on.</p>
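<p>For instance, in glTF such a custom property lands in a node's <code>extras</code>, which THREE.js exposes as <code>userData</code> after loading. A minimal, illustrative sketch (the <code>makeClickable</code> and <code>resolveSrc</code> helpers are hypothetical, not part of THREE.js or this spec):</p>
<pre><code>// sketch: reading XR Fragment custom properties (href/src) from a loaded glTF
// GLTFLoader copies glTF `extras` into `object.userData`; other formats/engines differ
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

new GLTFLoader().load('index.gltf', (gltf) => {
  gltf.scene.traverse((obj) => {
    const { href, src } = obj.userData;   // custom properties authored in the 3D file
    if (href) makeClickable(obj, href);   // hypothetical: wire up navigation
    if (src)  resolveSrc(obj, src);       // hypothetical: embed remote/local content
  });
});
</code></pre>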
In case of <code>buttonA</code>, the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, but <code>buttonB</code> will
<strong>replace the current scene</strong> with a new one (<code>other.fbx</code>).</p>
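<p>A rough, non-normative sketch of that branching logic (helper names like <code>applyFragment</code> and <code>loadScene</code> are assumptions for illustration):</p>
<pre><code>// sketch: a fragment-only href teleports within the currently loaded scene,
// an href with a file replaces the current scene with the referenced one
function navigateTo(href, currentScene) {
  const [file, fragment] = href.split('#');     // "#pos=0,0,2" or "other.fbx#pos=1,1,0"
  if (!file) {
    applyFragment(currentScene, fragment);      // buttonA-style: stay in the current scene
  } else {
    loadScene(file).then((scene) =>             // buttonB-style: replace the scene
      applyFragment(scene, fragment || ''));
  }
}
</code></pre>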
<p>Here’s an ASCII representation of a 3D scene-graph with 3D objects (<code>◻</code>) which embed remote &amp; local 3D objects (<code>◻</code>), with or without using queries:</p>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and living room).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br>
Resizing will happen according to its placeholder object (<code>aquariumcube</code>); see chapter Scaling.<br></p>
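<p>A possible (illustrative) way a viewer could implement this lazy-loading and instancing; <code>loadAsset</code>, <code>applyQuery</code> and <code>fitToPlaceholder</code> are assumed helpers, not prescribed by the spec:</p>
<pre><code>// sketch: resolve a `src` custom property: fetch the remote asset lazily,
// keep only the objects matched by the fragment query, instance them into
// the placeholder object and resize to fit it (see chapter Scaling)
async function resolveSrc(placeholder, src) {
  const [url, fragment] = src.split('#');
  const remote = await loadAsset(url);                       // e.g. ocean.com/aquarium.gltf
  const selected = applyQuery(remote, fragment || '');       // e.g. only `bass` and `tuna`
  for (const obj of selected) placeholder.add(obj.clone());  // instance inside `aquariumcube`
  fitToPlaceholder(placeholder, selected);                   // scale according to the placeholder
}
</code></pre>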
<h1 id="text-in-xr-tagging-linking-to-spatial-objects">Text in XR (tagging, linking to spatial objects)</h1>
<p>We still think and speak in simple text, not in HTML or RDF.<br>
It would be funny if people shouted <code>&lt;h1&gt;FIRE!&lt;/h1&gt;</code> in case of an emergency.<br>
Given the myriad of new (non-keyboard) XR interfaces, keeping text as-is (not obscuring it with markup) is preferred.<br>
Ideally, metadata comes <strong>later, with</strong> the text (appended at the end), and does not <strong>obfuscate</strong> the text or live <strong>in another</strong> file.<br></p>
<li>XR Fragments allows <b id="tagging-text">hassle-free XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <a href="https://visual-meta.info">visual-meta</a>).</li>
<li>XR Fragments allows hassle-free <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names to BibTeX</li>
<li>inline BibTeX is the minimum required <strong>requestless metadata</strong>-layer for XR text; RDF/JSON is great but optional (and too verbose for this spec's use-cases).</li>
<li>The default font (unless specified otherwise) is a modern monospace font, for maximum tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hard-coupling a mandatory <strong>obtrusive markup language</strong> or framework to an XR browser (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <a href="#core-principle">the core principle</a>)</li>
</ol>
<p>This allows recursive connections between text itself, as well as 3D objects and vice versa, using <strong>BibTeX-tags</strong>:</p>
<li><bid="textual-tagging">textual tag</b>: text or spatial-occurences named ‘houses’ is now automatically tagged with ‘house’</li>
<li><bid="spatial-tagging">spatial tag</b>: spatial object(s) with class:house (#.house) is now automatically tagged with ‘house’</li>
<li><bid="supra-tagging">supra-tag</b>: text- or spatial-object named ‘house’ (spatially) elsewhere, is now automatically tagged with ‘house’</li>
</ol>
<p>Spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted, links can be manipulated by the user.</p>
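<p>A rough sketch of how a viewer might collect the targets of one such BibTeX tag; the matching rules and helper shape below are illustrative assumptions, not normative:</p>
<pre><code>// sketch: given a tag name taken from an appended BibTeX entry (e.g. "house"),
// gather its textual, spatial and supra occurrences so they can be highlighted
// or connected with spatial wires
function collectTagTargets(tagName, scene, textNodes) {
  const targets = [];
  for (const node of textNodes)                            // textual tag: words containing 'house(s)'
    if (node.text.toLowerCase().includes(tagName)) targets.push(node);
  scene.traverse((obj) => {
    if (obj.name === tagName) targets.push(obj);           // supra-tag: objects *named* 'house'
    if (obj.userData.class === tagName) targets.push(obj); // spatial tag: objects with class:house (#.house)
  });
  return targets;
}
</code></pre>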
<blockquote>
<p>The simplicity of appending BibTeX (humans first, machines later) is demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail, and makes it perfect for GUIs to generate (bib)text later. Humans can still view/edit the metadata manually, by clicking ‘toggle metadata’ on the ‘back’ (e.g. a context-menu) of any XR text, anywhere, anytime.</p>
<li>automatically detects textual links between textual and spatial objects</li>
</ul>
<p>Its concept is similar to literate programming.
Its implications are that local/remote responses can now (a minimal sketch follows the list below):</p>
<ul>
<li>(de)multiplex/repair human text and requestless metadata (see <a href="#core-principle">the core principle</a>)</li>
<li>carry no separate implementation/network-overhead for metadata (see <a href="#core-principle">the core principle</a>)</li>
<li>ensure high FPS: HTML/RDF has historically been too ‘requesty’ for game studios</li>
<li>enable rich send/receive/copy-paste everywhere by default, with metadata being retained (see <a href="#core-principle">the core principle</a>)</li>
<li>need fewer network requests, therefore fewer webservices, therefore fewer servers, and overall deliver better FPS in XR</li>
</ul>
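<p>A minimal sketch of such (de)multiplexing, assuming the metadata is a BibTeX block appended after the human text (boundary detection is simplified for illustration):</p>
<pre><code>// sketch: split / re-join human text and its appended 'requestless' BibTeX metadata
function demultiplex(content) {
  const i = content.indexOf('\n@');                 // first appended BibTeX entry
  if (i === -1) return { text: content, bibtex: '' };
  return { text: content.slice(0, i), bibtex: content.slice(i).trim() };
}

function multiplex(text, bibtex) {                  // keep metadata when copy-pasting / sending
  return bibtex ? text + '\n' + bibtex + '\n' : text;
}
</code></pre>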
<blockquote>
<p>This significantly expands the expressiveness and portability of human text, by <strong>postponing machine-concerns to the end of the human text</strong>, in contrast to literal interweaving of content and markup symbols (or extra network requests, e.g. to webservices).</p>
</blockquote>
<p>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br>
To keep XR Fragments a lightweight spec, BibTeX is used for text-to-spatial object mappings (rather than e.g. a scripting language or RDF).</p>
<blockquote>
<p>Applications are also free to attach any JSON(-LD / RDF) to spatial objects using custom properties (but this is not interpreted by this spec).</p>
<p>Attaching visual-meta as <code>src</code> metadata to the (root) scene-node provides a hint to the XR Fragment browser.
3D object names and classes map to the <code>name</code> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects (a sketch follows the list below):</p>
<ol>
<li>When the user surfs to https://…/index.gltf#AI, the XR Fragments-parser points the end-user to the AI object and can show contextual info about it.</li>
<li>When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along.</li>
</ol>
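<p>A non-normative sketch of point 1, assuming the visual-meta glossary has been parsed into entries with a <code>name</code> field (<code>pointCameraAt</code> and <code>showPanel</code> are hypothetical helpers):</p>
<pre><code>// sketch: surfing to https://…/index.gltf#AI points the end-user to the `AI`
// object and shows contextual info from the matching visual-meta glossary entry
function showContextFor(fragment, scene, visualMeta) {
  const objectName = fragment.replace(/^#/, '');           // "#AI" -&gt; "AI"
  const target = scene.getObjectByName(objectName);
  const entry = visualMeta.glossary.find((e) => e.name === objectName);
  if (target) pointCameraAt(target);                       // navigate to the object
  if (entry) showPanel(entry);                             // contextual info about it
}
</code></pre>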
<h2 id="bibtex-as-lowest-common-denominator-for-tagging-triple">BibTeX as lowest common denominator for tagging/triples</h2>
<p>The everything-is-text focus of BibTeX is a great advantage for introspection, and perhaps a necessary bridge towards RDF (extrospective).
BibTeX-appendices (e.g. visual-meta) are already adopted in the physical world (academic books), perhaps due to their terseness &amp; simplicity:</p>
<ol>
<li><bid="frictionless-copy-paste">frictionless copy/pasting</b> (by humans) of (unobtrusive) content AND metadata</li>
<li>an introspective ‘sketchpad’ for metadata, which can (optionally) mature into RDF later</li>
<li>queries are only executed when <b>embedded</b> in the asset/scene (through <code>src</code>). This is to prevent sharing of scene-tampered URLs.</li>
<li>search words are matched against 3D object names or metadata-key(values)</li>
<li><code>#</code> equals <code>#q=*</code></li>
<li>words starting with <code>.</code> (<code>.language</code>) indicate class-properties</li>
<p><b>For example</b>: <code>#q=.foo</code> is shorthand for <code>#q=class:foo</code>, which selects objects with custom property <code>class</code>:<code>foo</code>. A plain <code>#q=cube</code> simply selects an object named <code>cube</code>.</p>
<table>
  <tbody>
    <tr>
      <td><code>*</code></td>
      <td>select all objects (only allowed in <code>src</code> custom property) in the <b>current</b> scene (<b>after</b> the default predefined_view <code>#</code> was executed)</td>
    </tr>
    <tr>
      <td><code>-</code></td>
      <td>removes/hides object(s)</td>
    </tr>
    <tr>
      <td><code>:</code></td>
      <td>indicates an object-embedded custom property key/value</td>
    </tr>
    <tr>
      <td><code>.</code></td>
      <td>alias for <code>class:</code> (<code>.foo</code> equals <code>class:foo</code>)</td>
    </tr>
    <tr>
      <td><code>&gt;</code> <code>&lt;</code></td>
      <td>compare float or int number</td>
    </tr>
    <tr>
      <td><code>/</code></td>
      <td>reference to the root-scene.<br>Useful for (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>).<br><code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects).<br><code>#q=-cube</code> hides object <code>cube</code> both in the root-scene <b>and</b> in nested scenes.</td>
    </tr>
  </tbody>
</table>
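<p>A simplified, non-normative sketch of how such a query string could be parsed into rules matching the operators above (the real parser is linked below; field names here are illustrative):</p>
<pre><code>// sketch: parse e.g. "-/cube .language price:&gt;2" into simple rule objects
function parseQuery(q) {
  return q.split(/\s+/).filter(Boolean).map((word) => {
    const rule = { exclude: false, rootOnly: false };
    if (word.startsWith('-')) { rule.exclude = true; word = word.slice(1); }   // `-`  hide object(s)
    if (word.startsWith('/')) { rule.rootOnly = true; word = word.slice(1); }  // `/`  root-scene only
    if (word.startsWith('.')) {                                                // `.foo` => class:foo
      rule.key = 'class'; rule.value = word.slice(1);
    } else if (word.includes(':')) {                                           // key:value custom property
      const [key, value] = word.split(':'); rule.key = key; rule.value = value;
    } else {
      rule.name = word;                                                        // plain object name
    }
    return rule;
  });
}
</code></pre>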
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</a></p>
<p>An example query-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</a></p>