<p>This draft is a specification for 4D URLs & <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation, which links space, time & text together, for hypermedia browsers with or without a network connection.<br>
The specification promotes spatial addressability, sharing, navigation, querying and annotation of interactive (text) objects across (XR) browsers.<br>
XR Fragments allows us to enrich existing dataformats, by recursive use of existing proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> and BibTags notation.<br></p>
<li>addressability and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<li>Interlinking text & 3D by collapsing space into a Word Graph (XRWG) to show <a href="#visible-links">visible links</a>, and augmenting text with <a href="https://github.com/coderofsalvation/tagbibs">bibs</a> / <a href="https://en.wikipedia.org/wiki/BibTeX">BibTags</a> appendices (see <a href="https://visual-meta.info">visual-meta</a> e.g.)</li>
<li>unlocking the spatial potential of the (originally 2D) hashtag (which jumps to a chapter) for navigating XR documents</li>
This also means that the repairability of machine matters should be human-friendly too (not too complex).<br>
XR Fragments seeks to connect the world of text (semantic web / RDF) and the world of pixels.<br>
Instead of combining them (in a game-editor e.g.), XR Fragments opts for a more integrated path <strong>towards</strong> them, by describing how to make browsers <strong>4D URL-ready</strong>:</p>
<p>XR Fragments does not look at XR (or the web) thru the lens of HTML, but approaches things from a higher-level feedback-loop/hypermedia-browser perspective:</p>
<li><a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> loading of 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li>
<p>XR Fragments are themselves <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</p>
<p>Supported popular 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p>
<p>NOTE: XR Fragments are optional but also file- and protocol-agnostic, which means that programmatic 3D scene(nodes) can also use the mechanism/metadata.</p>
<p>It also allows <strong>sourceportation</strong>, which basically means the end-user can teleport to the original XR Document of an <code>src</code>-embedded object, and see a visible connection to that particular embedded object. Basically, activating an embedded link turns it into an outbound link.</p>
<li>the Y-coordinate of <code>pos</code> identifies the floor position. This means that desktop projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).</li>
<li>set the position of the camera accordingly to the vector3 values of <code>#pos</code></li>
<li><code>rot</code> sets the rotation of the camera (only for non-VR/AR headsets)</li>
<li><code>t</code> sets the playback speed and animation-range of the current scene animation(s) or <code>src</code> media content (video/audio frames e.g.; use <code>t=0,7,7</code> to ‘STOP’ at frame 7)</li>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, but <code>buttonB</code> will <strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>, and assume <code>pos=0,0,0</code>.</p>
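<p>As a minimal, non-normative sketch (assuming a THREE.js camera; the function names <code>parseFragment</code> and <code>applyCameraFragment</code> are made up for illustration), the <code>pos</code>/<code>rot</code>/<code>t</code> fragments above could be applied like this:</p>
<pre><code>// Non-normative sketch: apply #pos / #rot / #t from a URL to a THREE.js camera.
function parseFragment(url){
  const hash = url.split('#')[1] || '';
  const args = {};
  for (const pair of hash.split('&')){
    const [key, value] = pair.split('=');
    if (key) args[key] = value || '';
  }
  return args;
}

function applyCameraFragment(camera, url){
  const args = parseFragment(url);
  if (args.pos){
    const [x, y, z] = args.pos.split(',').map(Number);
    camera.position.set(x, y + 1.5, z);        // desktop projection: add average person height
  }
  if (args.rot){                               // only for non-VR/AR headsets
    const [rx, ry, rz] = args.rot.split(',').map((v) => v * Math.PI / 180);
    camera.rotation.set(rx, ry, rz);
  }
  return args;                                 // args.t (playback range) is handled by the animation layer
}

// usage: applyCameraFragment(camera, 'https://y.io/z.fbx#pos=0,0,0&rot=0,90,0&t=1,100')
</code></pre>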
<li>IF a <code>#cube</code> matches a custom property-key (of an object) in the 3D file/scene (<code>#cube</code>: <code>#......</code>) <b>THEN</b> execute that predefined_view.</li>
<li>IF scene operators (<code>pos</code>) and/or animation operator (<code>t</code>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li>
<li>IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with <code>#pos=0,0,0</code> (<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31">example</a>)</li>
<li>IF a <code>#cube</code> matches the name (of an object) in the 3D file/scene then draw a line from the enduser(’s heart) to that object (to highlight it).</li>
<li>IF a <code>#cube</code> matches anything else in the XR Word Graph (XRWG) draw wires to them (text or related objects).</li>
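<p>A rough, non-normative sketch of this resolution order (the helpers <code>executePredefinedView</code>, <code>drawWire</code> and <code>XRWG.query</code> are placeholders; <code>applyCameraFragment</code> refers to the earlier sketch):</p>
<pre><code>// Non-normative sketch of the fragment-resolution order above.
function resolveFragment(scene, camera, frag){        // frag = 'cube' or 'pos=0,0,0&t=1,100' e.g.
  let positioned = false;

  // 1. custom property-key match => execute that predefined view
  scene.traverse((node) => {
    // key convention follows the custom properties of the 3D file
    const view = node.userData && (node.userData[frag] || node.userData['#' + frag]);
    if (view){ executePredefinedView(view); positioned = true; }
  });

  // 2. scene (pos/rot) and/or animation (t) operators => (re)position camera / animation-range
  if (/(^|&)(pos|rot|t)=/.test(frag)){
    applyCameraFragment(camera, '#' + frag);
    positioned = true;
  }

  // 3. no camera-position set in step 1 or 2 => update the top-level URL with #pos=0,0,0
  if (!positioned) history.replaceState({}, '', '#pos=0,0,0');

  // 4. objectname match => draw a line from the enduser('s heart) to that object
  const obj = scene.getObjectByName(frag);
  if (obj) drawWire(obj);

  // 5. anything else matching in the XRWG => draw wires to related text/objects
  for (const node of XRWG.query(frag) || []) drawWire(node);
}
</code></pre>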
<p>Here’s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote & local 3D objects <code>◻</code> with or without using queries:</p>
<p>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>bass</code> and <code>tuna</code> will be instanced inside <code>aquariumcube</code>.<br>
Resizing will happen according to its placeholder object <code>aquariumcube</code>, see chapter Scaling.<br></p>
<blockquote>
<p>Instead of cherrypicking objects with <code>#bass&tuna</code> thru <code>src</code>, queries can be used to import the whole scene (and filter out certain objects). See next chapter below.</p>
<li>local/remote content is instanced by the <code>src</code> (query) value (and attaches it to the placeholder mesh containing the <code>src</code> property)</li>
<li><b>local</b> <code>src</code> values (URL <strong>starting</strong> with <code>#</code>, like <code>#cube&foo</code>) mean that <strong>only</strong> the mentioned objectnames will be copied to the instanced scene (from the current scene) while preserving their names (to support recursive selectors). <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">(example code)</a></li>
<li><b>local</b> <code>src</code> values indicating a query (<code>#q=</code>) mean that all included objects (from the current scene) will be copied to the instanced scene (before applying the query) while preserving their names (to support recursive selectors). <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/src.js">(example code)</a></li>
<li>the instanced scene (from a <code>src</code> value) should be <b>scaled accordingly</b> to its placeholder object or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an ‘empty’-object in blender e.g.). For more info see Chapter Scaling.</li>
<li><b>external</b> <code>src</code> values should be served with the appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:</li>
<li><code>src</code> values should make their placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see <a href="#links">broken links</a>)</li>
<li><b>external</b> <code>src</code> values should respect the fallback link mechanism (see <a href="#broken-links">broken links</a>)</li>
<li>src-values are non-recursive: when linking to an external object (<code>src: foo.fbx#bar</code>), then <code>src</code>-metadata on object <code>bar</code> should be ignored.</li>
<li>clicking on external <code>src</code> values always allows sourceportation: teleporting to the origin URI to which the object belongs.</li>
<li><p>clicking an outbound “external”- or “file URI” fully replaces the current scene and assumes <code>pos=0,0,0&rot=0,0,0</code> by default (unless specified)</p></li>
<li><p>relocation/reorientation should happen locally for local URIs (<code>#pos=....</code>)</p></li>
<li><p>navigation should not happen “immediately” when the user is more than 2 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)</p></li>
<li><p>URL navigation should always be reflected in the client (in case of javascript: see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</a> for an example navigator).</p></li>
<li><p>In XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g., see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</a> for an example wearable)</p></li>
<li><p>in case of navigating to a new position, “first” navigate to the “current position” so that the “back-button” of the “browser-history” always refers to the previous position (see <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</a>)</p></li>
<li><p>portal-rendering: a 2:1 ratio texture-material indicates an equirectangular projection</p></li>
</ol>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/href.gltf#L192">» example 3D asset</a><br></p>
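<p>A non-normative sketch of an <code>href</code> click-handler honouring the history- and relocation-rules above (<code>loadScene</code> is a placeholder, <code>applyCameraFragment</code> refers to the earlier sketch):</p>
<pre><code>// Non-normative sketch: navigate to an href while keeping the back-button meaningful.
function navigateTo(camera, href){
  // "first" record the current position, so the back-button refers to the previous position
  history.replaceState({}, '', '#pos=' + camera.position.toArray().join(','));

  if (href[0] === '#'){
    // local URI: relocate/reorient inside the currently loaded scene
    history.pushState({}, '', href);
    applyCameraFragment(camera, href);
  } else {
    // external- or file-URI: replace the whole scene, assume pos=0,0,0&rot=0,0,0 unless specified
    history.pushState({}, '', href);
    const frag = href.includes('#') ? href : href + '#pos=0,0,0&rot=0,0,0';
    loadScene(href).then(() => applyCameraFragment(camera, frag));
  }
}
</code></pre>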
<p>End-users should always have read/write access to:</p>
<ol>
<li>the current (toplevel) <b>URL</b> (a URL bar etc.)</li>
<li>URL-history (a <b>back/forward</b> button e.g.)</li>
<li>Clicking/Touching an <code>href</code> navigates (and updates the URL) to another scene/file (and coordinate e.g. in case the URL contains XR Fragments).</li>
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br></p>
<blockquote>
<p>Rule of thumb: visible placeholder objects act as a ‘3D canvas’ for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas, e.g.).</p>
</blockquote>
<ol>
<li><b>IF</b> an embedded property (<code>src</code> e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):</li>
</ol>
<ul>
<li>calculate the <b>bounding box</b> of the “placeholder” object (maxsize=1.4 e.g.)</li>
<li>hide the “placeholder” object (material e.g.)</li>
<li>instance the <code>src</code> scene as a child of the existing object</li>
<li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li>
</ul>
<blockquote>
<p>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</p>
<li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object.</li>
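<p>A non-normative THREE.js sketch of both branches (the function name <code>fitInstancedScene</code> is made up for illustration):</p>
<pre><code>// Non-normative sketch of the scaling rules above, using THREE.js bounding boxes.
import * as THREE from 'three';

function fitInstancedScene(placeholder, instancedScene){
  const nonEmpty = placeholder.geometry &&
                   placeholder.geometry.attributes.position.count > 2;
  if (nonEmpty){
    // 1. the placeholder's bounding box acts as a protective '3D canvas'
    const canvas  = new THREE.Box3().setFromObject(placeholder).getSize(new THREE.Vector3());
    const content = new THREE.Box3().setFromObject(instancedScene).getSize(new THREE.Vector3());
    const scale   = Math.min(canvas.x / content.x, canvas.y / content.y, canvas.z / content.z);
    if (placeholder.material) placeholder.material.visible = false;   // hide the placeholder (material e.g.)
    instancedScene.scale.multiplyScalar(scale);                       // scale the instanced scene to fit
  } else {
    // 2. geometry-less placeholder ('empty' in blender e.g.): multiply the scale-vectors
    instancedScene.scale.multiply(placeholder.scale);
  }
  placeholder.add(instancedScene);               // instance the src scene as a child
}
</code></pre>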
<td>hide everything with tag <code>language</code>, but show all objects with tag <code>english</code></td>
</tr>
<tr>
<td><code>#q=price:>2 price:<5</code></td>
<td>of all objects with property <code>price</code>, show only objects with value between 2 and 5</td>
</tr>
</tbody>
</table>
<p>It’s a simple but powerful syntax which allows filtering the scene with a searchengine-prompt-style feel:</p>
<ol>
<li>queries are a way to traverse a scene, and filter objects based on their tag- or property-values.</li>
<li>words like <code>german</code> match tag-metadata of 3D objects like <code>"tag":"german"</code></li>
<li>words like <code>german</code> match (XR Text) objects with (Bib(s)TeX) tags like <code>#KarlHeinz@german</code> or <code>@german{KarlHeinz, ...</code> e.g.</li>
</ol>
<ul>
<li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</a></li>
<td>indicates an object-embedded custom property key/value</td>
</tr>
<tr>
<td><code>></code><code><</code></td>
<td>compare float or int number</td>
</tr>
<tr>
<td><code>/</code></td>
<td>reference to root-scene.<br>Useful in case of (preventing) showing/hiding objects in nested scenes (instanced by <code>src</code>) (*)</td>
</tr>
</tbody>
</table>
<blockquote>
<p>* = <code>#q=-/cube</code> hides object <code>cube</code> only in the root-scene (not nested <code>cube</code> objects)<br><code>#q=-cube</code> hides both object <code>cube</code> in the root-scene <b>AND</b> nested <code>cube</code> objects</p>
</blockquote>
<p><a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/q.js">» example implementation</a><br>
<a href="https://github.com/coderofsalvation/xrfragment/blob/main/example/assets/query.gltf#L192">» example 3D asset</a>
<li>create an associative array/object to store query-arguments as objects</li>
<li>detect object IDs & properties like <code>foo:1</code> and <code>foo</code> (reference regex: <code>/^.*:[><=!]?/</code> )</li>
<li>detect excluders like <code>-foo</code>,<code>-foo:1</code>,<code>-.foo</code>,<code>-/foo</code> (reference regex: <code>/^-/</code> )</li>
<li>detect root selectors like <code>/foo</code> (reference regex: <code>/^[-]?\//</code> )</li>
<li>detect number values like <code>foo:1</code> (reference regex: <code>/^[0-9\.]+$/</code> )</li>
<li>for every query token split string on <code>:</code></li>
<li>create an empty array <code>rules</code></li>
<li>then strip key-operator: convert “-foo” into “foo”</li>
<li>add operator and value to rule-array</li>
<li>therefore we set <code>id</code> to <code>true</code> or <code>false</code> (false=excluder <code>-</code>)</li>
<li>and we set <code>root</code> to <code>true</code> or <code>false</code> (true=<code>/</code> root selector is present)</li>
<li>we convert key ‘/foo’ into ‘foo’</li>
<li>finally we add the key/value to the store like <code>store.foo = {id:false,root:true}</code> e.g.</li>
</ol>
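<p>A minimal javascript sketch of these steps (non-normative; the reference parser linked below is written in Haxe and more complete):</p>
<pre><code>// Non-normative sketch of the query-parsing steps above.
function parseQuery(query){                      // query = '-/cube price:>2' e.g.
  const store = {};
  for (const token of query.split(' ')){
    if (!token) continue;
    const rules    = [];
    const excluder = /^-/.test(token);           // excluders: -foo, -foo:1, -/foo
    const root     = /^[-]?\//.test(token);      // root selectors: /foo, -/foo
    let [key, value] = token.split(':');         // split every token on ':'
    key = key.replace(/^[-\/]+/, '');            // strip key-operators: '-foo' and '/foo' become 'foo'
    if (value !== undefined){
      const operator = /^[><=!]/.test(value) ? value[0] : '=';
      const raw      = value.replace(/^[><=!]/, '');
      rules.push({ operator, value: /^[0-9\.]+$/.test(raw) ? parseFloat(raw) : raw });
    }
    store[key] = { id: !excluder, root, rules }; // e.g. store.cube = {id:false, root:true, rules:[]}
  }
  return store;
}

// parseQuery('-/cube')   => { cube:  { id:false, root:true,  rules:[] } }
// parseQuery('price:>2') => { price: { id:true,  root:false, rules:[{operator:'>', value:2}] } }
</code></pre>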
<blockquote>
<p>An example query-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Query.hx">found here</a></p>
<p>When predefined views, XRWG fragments and ID fragments (<code>#cube</code> or <code>#mytag</code> e.g.) are triggered by the enduser (via toplevel URL or clicking <code>href</code>):</p>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <code>tag</code> value</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <code>src</code> or <code>href</code> value</li>
<p>The obvious approach for this, is to consult the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), which basically has all these things already collected/organized for you during scene-load.</p>
<p><strong>UX</strong></p>
<ol start="4">
<li>do not update the wires when the enduser moves, leave them as is</li>
<li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the XRWG</li>
<p>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong>, <a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), augmented by Bib(s)Tex.</p>
<p>Instead of just throwing all kinds of media types together into one experience (games), what about their tagged/semantic relationships?<br>
Perhaps the following question is related: why is HTML adopted less in games outside the browser?
Through the lens of constructive lazy game-developers, ideally metadata must come <strong>with</strong> the text, but not <strong>obfuscate</strong> the text, or <strong>spawn another request</strong> to fetch it.<br>
XR Fragments does this by detecting Bib(s)Tex, without introducing a new language or fileformat.<br></p>
<blockquote>
<p>Why Bib(s)Tex? Because it seems to be the lowest common denominator for a human-curated XRWG (extendable by speech/scanner/writing/typing e.g., see <a href="https://github.com/coderofsalvation/hashtagbibs#bibs--bibtex-combo-lowest-common-denominator-for-linking-data">further motivation here</a>)</p>
<li>XR Fragments promotes (de)serializing a scene to the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>)</li>
<li>XR Fragments primes the XRWG, by collecting words from the <code>tag</code> and name-property of 3D objects.</li>
<li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype & Data URI)</li>
<li><a href="https://github.com/coderofsalvation/hashtagbibs">Bib’s</a> and BibTex are first tag citizens for priming the XRWG with words (from XR text)</li>
<li>Like Bibs, XR Fragments generalizes the BibTex author/title-semantics (<code>author{title}</code>) into <strong>this</strong> points to <strong>that</strong> (<code>this{that}</code>)</li>
<li>The XRWG should be recalculated when textvalues (in <code>src</code>) change</li>
<li>HTML/RDF/JSON is still great, but is beyond the XRWG-scope (they fit better in the application-layer)</li>
<li>Applications don’t have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li>
<li>The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li>
<li>Tags are the scope for now (supporting <a href="https://github.com/WICG/scroll-to-text-fragment">https://github.com/WICG/scroll-to-text-fragment</a> will be considered)</li>
<p>both <code>#john@baroque</code>-bib and BibTex <code>@baroque{john}</code> result in the same XRWG, however on top of that 2 tags (<code>house</code> and <code>todo</code>) are now associated with text/objectname/tag ‘baroque’.</p>
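<p>A non-normative sketch of how both forms could prime the XRWG (the datastructure and function names are illustrative, not prescribed by this spec):</p>
<pre><code>// Non-normative sketch: prime the XRWG from bibs and BibTex found in text.
function expandBibs(text){
  // hashtagbibs: '#john@baroque' expands to '@baroque{john,}'
  return text.replace(/#([^@\s]+)@([^@\s]+)/g, '@$2{$1,}');
}

function primeXRWG(xrwg, text){
  // BibTex 'this{that}' semantics: '@baroque{john,' connects 'baroque' and 'john'
  for (const [, thisWord, thatWord] of expandBibs(text).matchAll(/@([^{\s]+)\{([^,\s]+)/g)){
    xrwg.push({ this: thisWord, that: thatWord });
  }
  return xrwg;
}

// primeXRWG([], '#john@baroque') and primeXRWG([], '@baroque{john,}')
// both yield [ { this: 'baroque', that: 'john' } ]
</code></pre>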
<p><a href="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</a> potentially allow the enduser to annotate text/objects by <strong>speaking/typing/scanning associations</strong>, which the XR Browser saves to remote storage (or localStorage, per toplevel URL), as well as referencing BibTags per URI later on: <code>https://y.io/z.fbx#@baroque@todo</code> e.g.</p>
<li>The XR Browser needs to adjust tag-scope based on the enduser’s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking ‘toggle metadata’ on the ‘back’ (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<li>respect multi-line BibTeX metadata in text because of <a href="#core-principle">the core principle</a></li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by abandoning plain text as first tag citizen.</li>
<p>The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<li>lines beginning with <code>@</code> will not be rendered verbatim by default (<a href="https://github.com/coderofsalvation/hashtagbibs#hashtagbib-mimetypes">read more</a>)</li>
<li>the XRWG should expand bibs to BibTex occurring in text (<code>#contactjohn@todo@important</code> e.g.)</li>
<p>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</p>
<p>additional tagging using <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with ‘todo’, the enduser can type or speak <code>#note_canvas@todo</code></p>
<p>To prime the XRWG with text from plain text <code>src</code>-values, here’s an example XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex):</p>
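<p>A simplified, non-normative sketch of such a (de)multiplexer could look like this (the splitting heuristic and names are illustrative only):</p>
<pre><code>// Simplified, non-normative sketch: split a plain-text src value into human text
// and trailing bibs/BibTex metadata (which can then prime the XRWG).
function xrtext(text){
  const lines = text.split('\n');
  let cut = lines.length;
  // metadata sits at the end of the content: walk backwards while lines look like bibs/BibTex
  while (cut > 0){
    const line = lines[cut - 1].trim();
    if (line === '' || /^@/.test(line) || /^#\S+@\S+/.test(line)) cut--;
    else break;
  }
  return {
    text: lines.slice(0, cut).join('\n'),       // human text, rendered as-is
    meta: lines.slice(cut).join('\n').trim()    // bibs/BibTex, not rendered verbatim by default
  };
}

// xrtext('hello world\n\n#hello@greeting\n@greeting{hello,}')
// => { text: 'hello world', meta: '#hello@greeting\n@greeting{hello,}' }
</code></pre>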
<p>when an XR browser updates the human text, a quick scan for nonmatching tags (<code>@book{nonmatchingbook</code> e.g.) should be performed, prompting the enduser to delete them.</p>
<p>In spirit of Ted Nelson’s ‘transclusion resolution’, there’s a soft-mechanism to harden links & minimize broken links in various ways:</p>
<li>in case of <code>src</code>: nest a copy of the embedded object in the placeholder object (<code>embeddedObject</code>); it will not be replaced when the request fails</li>
<p>due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)</p>
<p>This would hide all objects tagged with <code>topic</code>, <code>courses</code> or <code>theme</code> (including math), so that later only objects tagged with <code>math</code> will be visible</p>
</blockquote>
<p>This makes spatial content multi-purpose, without the need to separate content into separate files, or show/hide things using a complex logiclayer like javascript.</p>
<p><strong>Q:</strong> Why is everything HTTP GET-based, what about POST/PUT/DELETE (HATEOAS)?<br>
<strong>A:</strong> Because it’s out of scope: XR Fragments specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML thru <code>src</code> values)</p>
<p><strong>Q:</strong> Why isn’t there support for scripting, while we have things like WASM?
<strong>A:</strong> This is out of scope, as it unhyperifies hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br>In order to prevent this backward movement (hypermedia tends to liberate people from finicky scripting), XR Fragments should never unhyperify itself by hardcoupling to a particular markup or scripting language. <a href="https://xrfragment.org/doc/RFC_XR_Macros.html">XR Macros</a> are an example of something which is probably smarter and safer for hypermedia browsers to implement, instead of going all-in with a turing-complete scripting language (and suffering the security consequences later).<br>
XR Fragments supports filtering objects in a scene only, because in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br>
Doing advanced scripting & network requests under the hood is obviously an interesting endeavour, but this is something which should not be hardcoupled with hypermedia.<br>This belongs to browser extensions.<br>
Non-HTML Hypermedia browsers should make browser extensions the right place to ‘extend’ experiences, in contrast to code/javascript inside hypermedia documents (this turned out to be a hypermedia antipattern).</p>
<td>an easy-to-speak/type/scan tagging SDL (<a href="https://github.com/coderofsalvation/hashtagbibs">see here</a>) which expands to BibTex/JSON/XML</td>