<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and BibTags notation.<br></p>
<li>addressability and navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>hasslefree tagging across text and spatial objects using <a href="https://en.wikipedia.org/wiki/BibTeX">BibTags</a> as appendix (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
<p>XR Fragments strives to serve (nontechnical/fuzzy) humans first, and machine(implementations) later, by ensuring hasslefree text-vs-thought feedback loops.<br>
This also means that the repair-ability of machine-matters should be human friendly too (not too complex).<br></p>
<blockquote>
<p>“When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix”</p>
</blockquote>
<p>Let’s always focus on average humans: the ‘fuzzy symbolical mind’ must be served first, before serving the greater <a href="https://en.wikipedia.org/wiki/Borg">‘categorized typesafe RDF hive mind’</a>.</p>
<td>available through custom property in 3D fileformats</td>
</tr>
</tbody>
</table>
<p>Popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREEjs), <code>COLLADA</code> and so on.</p>
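For instance, a custom property carrying an XR Fragment href could be stored in a glTF node like the following sketch (assuming glTF keeps application-specific properties in <code>extras</code>; the node name and fragment values are hypothetical illustrations):

```json
{
  "nodes": [
    {
      "name": "buttonA",
      "extras": { "href": "#pos=1,0,0&t=100,200" }
    }
  ]
}
```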
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <code>buttonB</code> will
<p>Here’s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote & local 3D objects <code>◻</code> with or without using queries:</p>
<p>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br>
<li>XR Fragments allows <b id="tagging-text">hasslefree XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <a href="https://visual-meta.info">visual-meta</a>).</li>
<li>Bibs/BibTeX-appendices are the first-choice <strong>requestless metadata</strong>-layer for XR text; HTML/RDF/JSON is great, but fits better in the application-layer</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling a mandatory <strong>obtrusive markuplanguage</strong> or framework with an XR browser (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <a href="#core-principle">the core principle</a>)</li>
<p>This allows instant realtime tagging of objects at various scopes:</p>
<table>
<thead>
<tr>
<th>scope</th>
<th>matching algo</th>
</tr>
</thead>
<tbody>
<tr>
<td><b id="textual-tagging">textual</b></td>
<td>text containing ‘houses’ is now automatically tagged with ‘house’ (incl. plaintext <code>src</code> child nodes)</td>
</tr>
<tr>
<td><b id="spatial-tagging">spatial</b></td>
<td>spatial object(s) with <code>"class":"house"</code> (because of <code>{#.house}</code>) are now automatically tagged with ‘house’ (incl. child nodes)</td>
</tr>
<tr>
<td><b id="supra-tagging">supra</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, named ‘house’, are automatically tagged with ‘house’ (current node to root node)</td>
</tr>
<tr>
<td><b id="omni-tagging">omni</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name ‘house’, are automatically tagged with ‘house’ (root node to all nodes)</td>
</tr>
<tr>
<td><b id="infinite-tagging">infinite</b></td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name ‘house’ or ‘houses’, are automatically tagged with ‘house’ (root node to all nodes)</td>
</tr>
</tbody>
</table>
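<p>The scope-matching above can be sketched as follows (a minimal sketch; the node shape <code>{name, class, text, children}</code> and the function names are hypothetical illustrations, not defined by this spec):</p>

```javascript
// textual scope: text containing 'house' or its plural is tagged with 'house'
function matchTextual(node, tag) {
  return typeof node.text === 'string' &&
         new RegExp(tag + 's?', 'i').test(node.text);
}

// spatial scope: objects with "class":"house" are tagged with 'house'
function matchSpatial(node, tag) {
  return node.class === tag;
}

// supra/omni scope: walk all nodes from the root, matching by name or class
function matchOmni(root, tag) {
  const hits = [];
  (function walk(n) {
    if (n.name === tag || n.class === tag) hits.push(n);
    (n.children || []).forEach(walk);
  })(root);
  return hits;
}
```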
<p>This empowers the enduser’s spatial expressiveness (see <a href="#core-principle">the core principle</a>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, and links can be manipulated by the user.<br>
The simplicity of appending BibTeX ‘tags’ (humans first, machines later) is also demonstrated by <ahref="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<li>The XR Browser needs to adjust tag-scope based on the enduser’s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking ‘toggle metadata’ on the ‘back’ (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<p>NOTE: infinite matches both ‘house’ and ‘houses’ in text, as well as spatial objects with <code>"class":"house"</code> or name “house”. This multiplexing of id/category is deliberate because of <ahref="#core-principle">the core principle</a>.</p>
<p>This indicates that <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> and <a href="https://en.wikipedia.org/wiki/BibTeX">bibtags</a> matching the regex <code>^@</code> will automatically get filtered out, in order to:</p>
<p>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</p>
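<p>Separating the human text from the trailing <code>^@</code> bibtags could be sketched like this (a minimal sketch; the function name <code>xrtextSplit</code> is a hypothetical illustration, not part of this spec):</p>

```javascript
// Split human-readable text from trailing bibtag lines (matching ^@),
// keeping machine-concerns at the end of the human text.
function xrtextSplit(content) {
  const text = [], tags = [];
  for (const line of content.split('\n')) {
    (/^@/.test(line.trim()) ? tags : text).push(line);
  }
  return { text: text.join('\n').trim(), tags };
}
```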
<li>When the user surfs to https://…/index.gltf#rentalhouse the XR Fragments-parser points the enduser to the rentalhouse object, and can show contextual info about it.</li>
<li>When (partial) remote content is embedded through XR Fragment queries, indirectly related metadata can be embedded along.</li>
BibTeX-appendices are already used in the digital AND physical world (academic books, <ahref="https://visual-meta.info">visual-meta</a>), perhaps due to its terseness & simplicity.<br>
<li>The XR Fragments spec does not aim to harden the BibTeX format</li>
<li>However, respect multi-line BibTeX values because of <a href="#core-principle">the core principle</a></li>
<li>Expand bibs and rulers (like <code>${visual-meta-start}</code>) according to the <a href="https://github.com/coderofsalvation/tagbibs">tagbibs spec</a></li>
console.log( xrtext.encode(text,tags) ) // multiplex text & bibtex back together
</code></pre>
<pre><code>@{references-start}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
year = {2021},
month = {11},
}
</code></pre>
<p>The above BibTeX-flavor can be imported, however it will be rewritten to Dumb BibTeX, to satisfy rules 2 & 5, as well as the <a href="#core-principle">core principle</a>.</p>
<pre><code>@visual-meta{
version = {1.1},
generator = {Author 7.6.2 (1064)},
section = {visual-meta-header}
}
@misc{emilyHegland/Edgar&Frod,
author = {Emily Hegland},
title = {Edgar & Frode Hegland, November 2021},
<li>queries are showing/hiding objects <strong>only</strong> when defined as <code>src</code> value (prevents sharing of scene-tampered URLs).</li>
<li>queries are highlighting objects when defined in the top-level (browser) URL (bar).</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <code>cube</code> and <code>foo</code> in <code>#q=cube foo</code> are matched against tags (BibTeX) inside plaintext <code>src</code> values like <code>@cube{redcube, ...</code> e.g.</li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of 3D objects like <code>"class":"german"</code></li>
<li>words starting with <code>.</code> like <code>.german</code> match class-metadata of (BibTeX) tags in XR Text objects like <code>@german{KarlHeinz, ...</code> e.g.</li>
<p><strong>For example</strong>: <code>#q=.foo</code> is a shorthand for <code>#q=class:foo</code>, which will select objects with custom property <code>class</code>:<code>foo</code>. A simple <code>#q=cube</code> will just select an object named <code>cube</code>.</p>
<td>select all objects (only useful in <code>src</code> custom property)</td>
</tr>
<tr>
<td><code>-</code></td>
<td>removes/hides object(s)</td>
</tr>
<tr>
<td><code>:</code></td>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><code>.</code></td>
<td>alias: <code>.foo</code> equals <code>class:foo</code></td>
</tr>
<tr>
<td><code>&gt;</code> <code>&lt;</code></td>
<td>compares float or integer numbers</td>
</tr>
<tr>
<td><code>/</code></td>
<td>reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>) (*)</td>
</tr>
</tbody>
</table>
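<p>The operators above can be sketched as a tiny token parser (a minimal sketch; the token fields <code>exclude</code>, <code>root</code>, <code>key</code>, <code>value</code> are hypothetical illustrations — for a real implementation see the query-parser linked below):</p>

```javascript
// Parse one #q= token per the operator table above.
function parseQueryToken(token) {
  const t = { exclude: false, root: false, key: 'name', value: token };
  if (t.value.startsWith('-')) { t.exclude = true; t.value = t.value.slice(1); } // hide object(s)
  if (t.value.startsWith('/')) { t.root = true; t.value = t.value.slice(1); }    // root-scene only
  if (t.value.startsWith('.')) { t.key = 'class'; t.value = t.value.slice(1); }  // .foo → class:foo
  else if (t.value.includes(':')) {                 // key:value custom property
    [t.key, t.value] = t.value.split(':');
  }
  return t;
}

function parseQuery(q) {
  return q.split(' ').filter(Boolean).map(parseQueryToken);
}
```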
<blockquote>
<p>* = <code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br><code>#q=-cube</code> hides object <code>cube</code> in the root-scene <b>AND</b> nested <code>cube</code> objects</p>
<p>An example query-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</a>