XR Fragments allows us to enrich existing data formats through recursive use of existing proven technologies like <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> and BibTeX notation.<br/>
<li>addressability and navigation of 3D scenes/objects: <eref target="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> + src/href spatial metadata</li>
<li>hassle-free tagging across text and spatial objects, using <eref target="https://en.wikipedia.org/wiki/BibTeX">BibTeX</eref> 'tags' as appendix (see e.g. <eref target="https://visual-meta.info">visual-meta</eref>)</li>
<t>XR Fragments strives to serve (non-technical/fuzzy) humans first, and machine (implementations) later, by ensuring hassle-free text-vs-thought feedback loops.<br/>
This also means that the repairability of machine-matters should be human-friendly too (not too complex).<br/>
</t>
<blockquote><t>"When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix"</t>
</blockquote><t>Let's always focus on average humans: the 'fuzzy symbolic mind' must be served first, before serving the greater <eref target="https://en.wikipedia.org/wiki/Borg">'categorized typesafe RDF hive mind'</eref>.</t>
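<t>To illustrate (a sketch only; the exact fragment grammar is detailed in the sections below), addressing an object and selecting by tag could look like:</t>
<artwork>
https://example.com/index.gltf#AI        navigates to the object named 'AI'
https://example.com/index.gltf#q=.house  selects objects with "class":"house"
</artwork>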
<td>available through custom properties in 3D file formats</td>
</tr>
</tbody>
</table><t>Popular compatible 3D file formats: <tt>.gltf</tt>, <tt>.obj</tt>, <tt>.fbx</tt>, <tt>.usdz</tt>, <tt>.json</tt> (THREEjs), <tt>COLLADA</tt>, and so on.</t>
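<t>In glTF, for example, such a custom property could live in a node's <tt>extras</tt> field (a hedged sketch; the <tt>class</tt> and <tt>src</tt> keys follow their usage elsewhere in this spec, and the surrounding glTF is abbreviated):</t>
<sourcecode type="json">
{
  "nodes": [
    {
      "name": "canvas",
      "extras": { "class": "house", "src": "painting.png" }
    }
  ]
}
</sourcecode>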
<blockquote><t>NOTE: XR Fragments are file-agnostic, which means the same metadata can also exist in programmatic 3D scene(nodes).</t>
In case of <tt>buttonA</tt> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <tt>buttonB</tt> will
<t>Here's an ASCII representation of a 3D scene-graph with 3D objects <tt>◻</tt> which embeds remote and local 3D objects <tt>◻</tt> (with/without using queries):</t>
<t>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <tt>painting.png</tt> onto the (plane) object called <tt>canvas</tt> (which is copy-instanced in the bedroom and living room).<br/>
Also, after lazy-loading <tt>ocean.com/aquarium.gltf</tt>, only the queried objects <tt>bass</tt> and <tt>tuna</tt> will be instanced inside <tt>aquariumcube</tt>.<br/>
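For instance (a sketch, assuming the query syntax detailed in the queries section below), <tt>aquariumcube</tt> could carry <tt>"src": "https://ocean.com/aquarium.gltf#q=bass tuna"</tt> as a custom property.<br/>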
<li>XR Fragments allows <b id="tagging-text">hassle-free XR text tagging</b>, using BibTeX metadata <strong>at the end of content</strong> (like <eref target="https://visual-meta.info">visual-meta</eref>); see the sketch after this list</li>
<li>XR Fragments allows hassle-free <a href="#textual-tag">textual tagging</a>, <a href="#spatial-tag">spatial tagging</a>, and <a href="#supra-tagging">supra tagging</a>, by mapping 3D/text object (class)names using BibTeX 'tags'</li>
<li>inline BibTeX 'tags' are the minimum required <strong>requestless metadata</strong>-layer for XR text; RDF/JSON is great, but fits better in the application layer</li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <eref target="#core-principle">the core principle</eref>).</li>
<li>anti-pattern: hard-coupling a mandatory <strong>obtrusive markup language</strong> or framework to an XR browser (HTML/VRML/Javascript) (see <eref target="#core-principle">the core principle</eref>)</li>
<li>anti-pattern: limiting human introspection, by immediately funneling human thought into typesafe, precise, pre-categorized metadata like RDF (see <eref target="#core-principle">the core principle</eref>)</li>
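<t>A hedged sketch of such requestless tagging (the entry and its fields are illustrative, not normative): plain human text, with a BibTeX 'tag' appended at the end of the content:</t>
<sourcecode type="bibtex">
Hello, this text talks about a house.

@house{my_house,
  url = {#.house}
}
</sourcecode>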
<td>spatial object(s) with <tt>"class":"house"</tt> (because of <tt>{#.house}</tt>) are now automatically tagged with 'house' (incl. child nodes)</td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house', are automatically tagged with 'house' (i.e. the tag propagates to those nodes too)</td>
<td>text- or spatial-object(s) (non-descendant nodes) elsewhere, containing class/name 'house' or 'houses', are automatically tagged with 'house' (i.e. the tag propagates to those nodes too)</td>
</tr>
</tbody>
</table><t>This empowers the end-user's spatial expressiveness (see <eref target="#core-principle">the core principle</eref>): spatial wires can be rendered, words can be highlighted, spatial objects can be highlighted/moved/scaled, and links can be manipulated by the user.<br/>
The simplicity of appending BibTeX 'tags' (humans first, machines later) is also demonstrated in greater detail by <eref target="https://visual-meta.info">visual-meta</eref>.</t>
<li>The XR Browser needs to offer a global setting/control to adjust the tag-scope, with at least the range <tt>[text, spatial, text+spatial, supra, omni, infinite]</tt></li>
<li>The XR Browser should always allow the human to view/edit the BibTeX metadata manually, by clicking 'toggle metadata' on the 'back' (e.g. a context menu) of any XR text, anywhere, anytime.</li>
<blockquote><t>NOTE: infinite matches both 'house' and 'houses' in text, as well as spatial objects with <tt>"class":"house"</tt> or name "house". This multiplexing of id/category is deliberate, because of <eref target="#core-principle">the core principle</eref>.</t>
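<t>A hedged sketch of this multiplexed matching (all names are illustrative, not normative):</t>
<sourcecode type="typescript">
// Hedged sketch: the deliberate multiplexing of id/category means one tag
// matches 3D object names, "class" metadata, and (plural) words in text.
// A browser-wide tag-scope setting ([text, spatial, ...]) would gate which
// nodes are offered to this check. All names here are illustrative.
interface SceneNode { name?: string; clazz?: string; text?: string }

function matchesTag(node: SceneNode, tag: string): boolean {
  const plural = tag + 's';                  // 'house' also matches 'houses'
  if (node.name === tag) return true;        // 3D object named 'house'
  if (node.clazz === tag) return true;       // "class":"house"
  const words = (node.text || '').split(/\W+/);
  return words.indexOf(tag) !== -1 || words.indexOf(plural) !== -1;
}
</sourcecode>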
<t>This indicates that any BibTeX metadata starting with <tt>@</tt> will automatically be filtered out, and that the XR browser (see the sketch below the list):</t>
<ul spacing="compact">
<li>automatically detects textual links between textual and spatial objects</li>
</ul>
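<t>A hedged sketch of such filtering (the split heuristic is illustrative, not normative):</t>
<sourcecode type="typescript">
// Hedged sketch: demultiplex human text from its requestless BibTeX
// appendix, by filtering out everything from an '@' line up to the
// closing '}' of each entry. The heuristic is illustrative only.
function demultiplex(content: string): { text: string; tags: string[] } {
  const text: string[] = [];
  const tags: string[] = [];
  let entry = '';
  for (const line of content.split('\n')) {
    if (line.startsWith('@') || entry.length > 0) {  // BibTeX starts with '@'
      entry += line + '\n';
      if (line.trim() === '}') { tags.push(entry); entry = ''; }
    } else text.push(line);
  }
  return { text: text.join('\n'), tags };
}
</sourcecode>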
<t>Its concept is similar to literate programming.
Its implications are that local/remote responses can now:</t>
<ul spacing="compact">
<li>(de)multiplex/repair human text and requestless metadata (see <eref target="#core-principle">the core principle</eref>)</li>
<li>no separate implementation/network-overhead for metadata (see <eref target="#core-principle">the core principle</eref>)</li>
<li>ensuring high FPS: historically, HTML/RDF is too 'requesty' for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, with metadata retained (see <eref target="#core-principle">the core principle</eref>)</li>
<li>fewer network requests, therefore fewer webservices, therefore fewer servers, and overall better FPS in XR</li>
</ul>
<blockquote><t>This significantly expands the expressiveness and portability of human text, by <strong>postponing machine-concerns to the end of the human text</strong>, in contrast to literal interweaving of content and markup symbols (or extra network requests and webservices).</t>
</blockquote><t>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br/>
<blockquote><t>Applications are also free to attach any JSON(LD / RDF) to spatial objects using custom properties (though this is not interpreted by this spec).</t>
<t>Attaching visual-meta as <tt>src</tt> metadata to the (root) scene-node hints the XR Fragment browser.
3D object names and classes map to <tt>name</tt> of visual-meta glossary-entries.
This allows rich interaction and interlinking between text and 3D objects:</t>
<ol spacing="compact">
<li>When the user surfs to <tt>https://.../index.gltf#AI</tt>, the XR Fragments parser points the end-user to the AI object and can show contextual info about it.</li>
<li>When (partial) remote content is embedded through XR Fragment queries (see XR Fragment queries), its related visual-meta can be embedded along (see the sketch below).</li>
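<t>A hypothetical glossary entry (the actual visual-meta format is documented at <eref target="https://visual-meta.info">visual-meta.info</eref>; the entry type and fields here are illustrative) whose <tt>name</tt> matches the 3D object <tt>AI</tt>:</t>
<sourcecode type="bibtex">
@glossary{AI,
  name = {AI},
  description = {...}
}
</sourcecode>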
<section anchor="bibtex-as-lowest-common-denominator-for-tagging-triples"><name>BibTeX as lowest common denominator for tagging/triples</name>
<blockquote><t>"When a car breaks down, the ones <strong>without</strong> turbosupercharger are easier to fix"</t>
</blockquote><t>Unlike XML or JSON, the typeless, unnested, everything-is-text nature of BibTeX tags is a great advantage for introspection.<br/>
In a way, the RDF project should welcome it as a missing sensemaking precursor to (eventual) extrospective RDF.<br/>
BibTeX-appendices are already used in the digital AND physical world (academic books, <eref target="https://visual-meta.info">visual-meta</eref>), perhaps due to their terseness and simplicity.<br/>
In that sense, it's one step up from the <tt>.ini</tt> file format (which has never leaked into the physical book-world):</t>
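<t>A minimal sketch of such an appended tag (the fields are illustrative):</t>
<sourcecode type="bibtex">
@house{castle,
  url = {https://example.com/castle.gltf}
}
</sourcecode>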
<li>queries show/hide objects <strong>only</strong> when defined as <tt>src</tt> value (this prevents sharing of scene-tampered URLs).</li>
<li>queries highlight objects when defined in the top-level (browser) URL (bar).</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against 3D object names or custom metadata-key(values)</li>
<li>search words like <tt>cube</tt> and <tt>foo</tt> in <tt>#q=cube foo</tt> are matched against tags (BibTeX) inside plaintext <tt>src</tt> values like <tt>@cube{redcube, ...</tt> e.g.</li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of 3D objects like <tt>"class":"german"</tt></li>
<li>words starting with <tt>.</tt> like <tt>.german</tt> match class-metadata of (BibTeX) tags in XR Text objects like <tt>@german{KarlHeinz, ...</tt> e.g.</li>
<blockquote><t><strong>For example</strong>: <tt>#q=.foo</tt> is shorthand for <tt>#q=class:foo</tt>, which will select objects with custom property <tt>class</tt>:<tt>foo</tt>. A simple <tt>#q=cube</tt> will just select an object named <tt>cube</tt>.</t>
<blockquote><t>An example query-parser (which compiles to many languages) can be <eref target="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</eref>.</t>
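<t>A hedged sketch of the tokenizing rules above (the canonical implementation is the Query.hx linked above; all names here are illustrative):</t>
<sourcecode type="typescript">
// Hedged sketch of the query rules above; names are illustrative,
// the canonical implementation is the linked Query.hx.
interface QueryToken { key: string; value: string }

function parseQuery(q: string): QueryToken[] {
  return q.split(' ').filter(w => w.length > 0).map(word => {
    if (word.startsWith('.'))                         // '.foo' is shorthand
      return { key: 'class', value: word.slice(1) };  // ...for 'class:foo'
    const i = word.indexOf(':');
    if (i !== -1)                                     // explicit 'key:value'
      return { key: word.slice(0, i), value: word.slice(i + 1) };
    return { key: 'name', value: word };              // bare 'cube' selects by name
  });
}
</sourcecode>
<t>Here <tt>#q=cube .foo</tt> would yield the tokens <tt>{name:cube}</tt> and <tt>{class:foo}</tt>.</t>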