<p>This draft is a specification for 4D URLs and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation, which links space, time and text together, for hypermedia browsers with or without a network connection.<br>
The specification promotes spatial addressability, sharing, navigation, filtering and databinding of objects for (XR) browsers.<br>
XR Fragments allows us to better use existing metadata inside 3D scene(files), by connecting it to proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a>.</p>
<li>addressability and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> + src/href spatial metadata</li>
<p><strong>XR Fragments allows control of metadata in 3D scene(files) using URLs</strong></p>
<p>XR Fragments seeks to connect the world of text (the semantic web / RDF) and the world of pixels.<br>
Instead of combining them (in a game-editor e.g.), XR Fragments <strong>integrates all</strong>, by collecting metadata into an XRWG and controlling it via URL:</p>
<p>XR Fragments does not look at XR (or the web) through the lens of HTML.<br>Instead, it approaches things from a higher-level feedbackloop/hypermedia browser-perspective:</p>
<li><a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> loading of 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li>
<p>XR Fragments themselves are <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</p>
<p>Supported popular 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p>
<p>NOTE: XR Fragments are optional but also file- and protocol-agnostic, which means that programmatic 3D scene(nodes) can also use the mechanism/metadata.</p>
<p>These are automatic fragment-to-metadata mappings, which only trigger if the 3D scene metadata matches a specific identifier (<code>aliasname</code> e.g.)</p>
<p>It also allows <strong>sourceportation</strong>: the enduser can teleport to the original XR Document of an <code>src</code>-embedded object, and see a visible connection to that particular embedded object. Basically, activating an embedded link turns it into an outbound link.</p>
<li>the Y-coordinate of <code>pos</code> identifies the floorposition. This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).</li>
<li>set the position of the camera according to the vector3 values of <code>#pos</code> (see the sketch after this list)</li>
<li><code>rot</code> sets the rotation of the camera (only for non-VR/AR headsets)</li>
<li><code>t</code> sets the playback speed and animation-range of the current scene animation(s) or <code>src</code>-mediacontent (video/audio-frames e.g.; use <code>t=0,7,7</code> to ‘STOP’ at frame 7)</li>
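<p>A minimal sketch (assuming THREE.js on desktop; <code>applyCameraFragment</code> is a hypothetical helper, not part of the spec) of how a browser could apply <code>#pos</code> and <code>#rot</code>:</p>
<pre><code>// sketch: apply #pos / #rot from a URL fragment to a THREE.js camera
import * as THREE from 'three';

function applyCameraFragment(fragment, camera, immersive = false) {
  const params = new URLSearchParams(fragment.replace(/^#/, ''));
  if (params.has('pos')) {
    const [x, y, z] = params.get('pos').split(',').map(Number);
    // Y identifies the floorposition: desktop projections add 1.5m,
    // VR/AR headsets add the real head-height themselves
    camera.position.set(x, y + (immersive ? 0 : 1.5), z);
  }
  if (params.has('rot') && !immersive) { // rot only applies to non-VR/AR
    const [rx, ry, rz] = params.get('rot').split(',').map(Number);
    camera.rotation.set(
      THREE.MathUtils.degToRad(rx),
      THREE.MathUtils.degToRad(ry),
      THREE.MathUtils.degToRad(rz)
    );
  }
}

const camera = new THREE.PerspectiveCamera();
applyCameraFragment('#pos=0,0,0&rot=0,90,0', camera);
</code></pre>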
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>currently loaded scene</strong>, whereas <code>buttonB</code> will <strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>, and assume <code>pos=0,0,0</code>.</p>
<li>IF a <code>#cube</code> matches a custom property-key (of an object) in the 3D file/scene (<code>#cube</code>: <code>#......</code>) <b>THEN</b> execute that predefined view.</li>
<li>IF scene operators (<code>pos</code>) and/or animation operator (<code>t</code>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li>
<li>IF no camera-position has been set in <b>step 1 or 2</b>, update the top-level URL with <code>#pos=0,0,0</code> (<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31">example</a>)</li>
<li>IF a <code>#cube</code> matches the name (of an object) in the 3D file/scene then draw a line from the enduser(’s heart) to that object (to highlight it).</li>
<li>IF a <code>#cube</code> matches anything else in the XR Word Graph (XRWG), draw wires to them (text or related objects), as sketched below.</li>
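<p>A condensed sketch of this resolution order (THREE.js; the camera-related steps are omitted, <code>xrwg</code> is a word-to-objects Map and <code>drawWire</code> a hypothetical highlight helper):</p>
<pre><code>// sketch: resolve a #fragment against the loaded scene, in the order above
function resolveFragment(frag, scene, xrwg, drawWire) {
  const id = frag.replace(/^#/, '');
  // 1. custom property-key match: execute that predefined view
  let owner = null;
  scene.traverse((node) => { if (node.userData[id]) owner = node; });
  if (owner) return resolveFragment(owner.userData[id], scene, xrwg, drawWire);
  // 2. object-name match: draw a wire from the enduser to that object
  const byName = scene.getObjectByName(id);
  if (byName) return drawWire(byName);
  // 3. anything else in the XR Word Graph: draw wires to all matches
  (xrwg.get(id) || []).forEach(drawWire);
}
</code></pre>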
<p>Here’s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embed remote & local 3D objects <code>◻</code> with or without filters:</p>
<p>An XR Fragment-compatible browser viewing this scene lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed- and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>fishbowl</code> (and <code>bass</code> and <code>tuna</code>) will be instanced inside <code>aquariumcube</code>.</p>
<p>Instead of cherry-picking a rootobject <code>#fishbowl</code> with <code>src</code>, additional filters can be used to include/exclude certain objects. See the next chapter on filtering.</p>
<li>local/remote content is instanced by the <code>src</code> (filter) value (and attached to the placeholder mesh containing the <code>src</code> property)</li>
<li>by default all objects are loaded into the instanced src (scene) object (but not shown yet)</li>
<li><b>local</b> <code>src</code> values (<code>#...</code> e.g.) starting with a non-negating filter (<code>#cube</code> e.g.) will (deep)reparent that object (with name <code>cube</code>) as the new root of the scene at position 0,0,0</li>
<li><b>local</b> <code>src</code> values should respect (negative) filters (<code>#-foo&price=>3</code>)</li>
<li>the instanced scene (from a <code>src</code> value) should be <b>scaled</b> to fit its placeholder object, or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an ‘empty’-object in blender e.g.). For more info see Chapter Scaling.</li>
<li><b>external</b> <code>src</code> values should be served with the appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:</li>
<li><code>src</code> values should make their placeholder object invisible, and only reveal their children once the resolved content has been successfully retrieved (see <a href="#links">broken links</a>)</li>
<li><b>external</b> <code>src</code> values should respect the fallback link mechanism (see <a href="#broken-links">broken links</a>)</li>
<li>src-values are non-recursive: when linking to an external object (<code>src: foo.fbx#bar</code>), then <code>src</code>-metadata on object <code>bar</code> should be ignored.</li>
<li>an external <code>src</code>-value should always allow a sourceportation icon within 3 meters: teleporting to the origin URI to which the object belongs.</li>
<li>when the enduser clicks an href with <code>#t=1,0,0</code> (play), it will be applied to all <code>src</code> mediacontent with a timeline (mp4/mp3 e.g.)</li>
<li>a non-euclidean portal can be rendered for flat 3D objects (using a stencil buffer e.g.) in case of spatial <code>src</code>-values (an object <code>#world3</code> or URL <code>world3.fbx</code> e.g.); see the sketch after this list.</li>
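<p>A rough sketch of <code>src</code> resolution under these rules (THREE.js with its GLTFLoader addon; <code>resolveSrc</code> is illustrative and only handles the gltf mimetype):</p>
<pre><code>// sketch: instance local (#...) or external (model/gltf) src values
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

async function resolveSrc(placeholder, scene) {
  const src = placeholder.userData.src;
  // hide the placeholder mesh itself, but keep its (future) children visible
  if (placeholder.material) placeholder.material.visible = false;
  if (src.startsWith('#')) {
    // local src: (deep)reparent a copy of the named object as instance root
    const root = scene.getObjectByName(src.slice(1));
    if (root) placeholder.add(root.clone(true));
  } else {
    // external src: the mimetype decides the renderer (gltf shown here);
    // src-metadata inside the result is ignored (non-recursive)
    const gltf = await new GLTFLoader().loadAsync(src.split('#')[0]);
    placeholder.add(gltf.scene);
  }
}
</code></pre>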
<li><p>clicking an outbound “external”- or “file URI” fully replaces the current scene and assumes <code>pos=0,0,0&rot=0,0,0</code> by default (unless specified)</p></li>
<li><p>relocation/reorientation should happen locally for local URIs (<code>#pos=....</code>)</p></li>
<li><p>navigation should not happen “immediately” when the user is more than 2 meters away from the portal/object containing the href (to prevent accidental navigation e.g.)</p></li>
<li><p>URL navigation should always be reflected in the client (in case of javascript: see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</a> for an example navigator).</p></li>
<li><p>In XR mode, the navigator back/forward-buttons should always be visible (using a wearable e.g.; see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</a> for an example wearable)</p></li>
<li><p>in case of navigating to a new position, “first” navigate to the “current position” so that the “back-button” of the “browser-history” always refers to the previous position (see <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</a> and the sketch below)</p></li>
<li><p>ignore previous rule in special cases, like clicking an <code>href</code> using camera-portal collision (the back-button would cause a teleport-loop)</p></li>
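<p>A sketch of that history trick using the plain browser History API (assuming the camera position is the current viewpoint; <code>navigateTo</code> is a hypothetical helper):</p>
<pre><code>// sketch: record the current position before navigating, so the
// back-button returns the enduser to where they actually were
function navigateTo(fragment, camera) {
  const p = camera.position;
  const here = `#pos=${p.x.toFixed(2)},${p.y.toFixed(2)},${p.z.toFixed(2)}`;
  history.replaceState(null, '', here);   // "first" navigate to current position
  history.pushState(null, '', fragment);  // then to the new position
  window.dispatchEvent(new HashChangeEvent('hashchange', { newURL: location.href }));
}
</code></pre>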
<p>End-users should always have read/write access to:</p>
<ol>
<li>the current (toplevel) <b>URL</b> (a URL bar, etc.)</li>
<li>URL-history (a <b>back/forward</b> button e.g.)</li>
<li>Clicking/Touching an <code>href</code> navigates (and updates the URL) to another scene/file (and coordinate e.g. in case the URL contains XR Fragments).</li>
</ol>
<p>How does the scale of the object (with the embedded properties) impact the scale of the referenced content?</p>
<blockquote>
<p>Rule of thumb: visible placeholder objects act as a ‘3D canvas’ for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas, e.g.).</p>
</blockquote>
<ol>
<li><b>IF</b> an embedded property (<code>src</code> e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):</li>
</ol>
<ul>
<li>calculate the <b>bounding box</b> of the “placeholder” object (maxsize=1.4 e.g.)</li>
<li>hide the “placeholder” object (material e.g.)</li>
<li>instance the <code>src</code> scene as a child of the existing object</li>
<li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li>
</ul>
<blockquote>
<p>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</p>
</blockquote>
<ol start="2">
<li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object (sketched below).</li>
</ol>
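<p>A minimal sketch of this scaling algorithm in THREE.js (<code>fitInstanceToPlaceholder</code> is a hypothetical helper):</p>
<pre><code>// sketch: scale an instanced src scene to its placeholder's bounding box
import * as THREE from 'three';

function fitInstanceToPlaceholder(placeholder, instance) {
  const hasGeometry = placeholder.geometry &&
        placeholder.geometry.attributes.position.count > 2;
  if (hasGeometry) {
    // 1. the placeholder's bounding box acts as a protective '3D canvas'
    const max = new THREE.Box3().setFromObject(placeholder)
                                .getSize(new THREE.Vector3());
    const cur = new THREE.Box3().setFromObject(instance)
                                .getSize(new THREE.Vector3());
    instance.scale.setScalar(Math.min(max.x / cur.x, max.y / cur.y, max.z / cur.z));
    if (placeholder.material) placeholder.material.visible = false; // hide placeholder
  } else {
    // 2. empty placeholder: scale relatively via its scale-property
    instance.scale.multiply(placeholder.scale);
  }
  placeholder.add(instance);
}
</code></pre>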
<ul>
<li>playposition is reset to framestart, when framestart or framestop is greater than 0</li>
</ul>
<table>
<tr><th>Example Value</th><th>Explanation</th></tr>
<tr><td><code>1,1,100</code></td><td>play loop between frame 1 and 100</td></tr>
<tr><td><code>1,1,0</code></td><td>play once from frame 1 (oneshot)</td></tr>
<tr><td><code>1,0,0</code></td><td>play (previously set looprange, if any)</td></tr>
<tr><td><code>0,0,0</code></td><td>pause</td></tr>
<tr><td><code>1,1,1</code></td><td>play and auto-loop between begin and end of duration</td></tr>
<tr><td><code>-1,0,0</code></td><td>reverse playback speed</td></tr>
<tr><td><code>2.3,0,0</code></td><td>set (forward) playback speed to 2.3 (no restart)</td></tr>
<tr><td><code>-2.3,0,0</code></td><td>set (reverse) playback speed to -2.3 (no restart)</td></tr>
<tr><td><code>-2.3,100,0</code></td><td>set (reverse) playback speed to -2.3, restarting from frame 100</td></tr>
</table>
<p>» example implementation: <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/t.js">t.js</a></p>
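<p>A reduced sketch of these semantics on a THREE.js <code>AnimationAction</code> (a fixed <code>fps</code> is assumed here to convert frames to seconds; <code>applyT</code> is illustrative):</p>
<pre><code>// sketch: apply t=speed,framestart,framestop to an AnimationAction
function applyT(tValue, action, fps = 24) {
  const [speed, framestart, framestop] = tValue.split(',').map(Number);
  action.timeScale = speed;                 // 0 pauses, negative reverses
  if (framestart > 0 || framestop > 0) {
    action.time = framestart / fps;         // playposition reset to framestart
  }
  if (speed !== 0) action.play();
  // a looprange (framestart..framestop) would be enforced in the render
  // loop, by rewinding action.time whenever it passes framestop / fps
}

// applyT('1,1,100', action)  -> play loop between frame 1 and 100
// applyT('0,0,0', action)    -> pause
</code></pre>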
<li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</a> which used a dedicated <code>q=</code> variable (now deprecated and usable directly)</li>
<p>NOTE 1: after an external embedded object has been instanced (<code>src: https://y.com/bar.fbx#room</code> e.g.), filters no longer affect it (reason: local tag/name collisions can be mitigated easily, but not in case of remote content).</p>
<p>NOTE 2: depending on the 3D framework used, toggling objects (in)visible should happen by enabling/disabling writes to the colorbuffer (so children can remain visible while their parents are invisible), as sketched below.</p>
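<p>In THREE.js such a toggle could look like this (whether <code>depthWrite</code> should also be toggled is an implementation choice, assumed here):</p>
<pre><code>// sketch: hide an object via the colorbuffer so its children stay visible
function setVisible(obj, visible) {
  if (obj.material) {
    obj.material.colorWrite = visible; // skip color writes instead of obj.visible
    obj.material.depthWrite = visible; // assumption: avoid occluding children
  }
}
</code></pre>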
<p>An example filter-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Filter.hx">found here</a></p>
<p>When predefined views, XRWG fragments and ID fragments (<code>#cube</code> or <code>#mytag</code> e.g.) are triggered by the enduser (via toplevel URL or clicking <code>href</code>):</p>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <code>tag</code> value</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <code>src</code> or <code>href</code> value</li>
<p>The obvious approach for this is to consult the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), which basically has all these things already collected/organized for you during scene-load.</p>
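<p>Drawing such a wire is straightforward in THREE.js (the 0.4m heart-offset below the camera is an arbitrary choice here):</p>
<pre><code>// sketch: draw a wire from the enduser (heartposition) to a matched object
import * as THREE from 'three';

function drawWire(camera, target, scene) {
  const heart = camera.position.clone().sub(new THREE.Vector3(0, 0.4, 0));
  const to = target.getWorldPosition(new THREE.Vector3());
  const geom = new THREE.BufferGeometry().setFromPoints([heart, to]);
  const wire = new THREE.Line(geom, new THREE.LineBasicMaterial({ color: 0x00ffff }));
  scene.add(wire);
  return wire;
}
</code></pre>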
<p><strong>UX</strong></p>
<ol start="4">
<li>do not update the wires when the enduser moves, leave them as is</li>
<li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the XRWG</li>
<p>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong>, <a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), augmented by Bib(s)Tex.</p>
<p>Instead of just throwing together all kinds of media types into one experience (games), what about their tagged/semantical relationships?<br>
Perhaps the following question is related: why is HTML adopted less in games outside the browser?
Through the lens of constructively lazy game-developers, metadata should ideally come <strong>with</strong> the text, but neither <strong>obfuscate</strong> the text nor <strong>spawn another request</strong> to fetch it.<br>
XR Fragments does this by detecting Bib(s)Tex, without introducing a new language or fileformat.<br></p>
<blockquote>
<p>Why Bib(s)Tex? Because it seems to be the lowest common denominator for a human-curated XRWG (extendable by speech/scanner/writing/typing e.g.; see <a href="https://github.com/coderofsalvation/hashtagbibs#bibs--bibtex-combo-lowest-common-denominator-for-linking-data">further motivation here</a>)</p>
</blockquote>
<li>XR Fragments promotes (de)serializing a scene to the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>)</li>
<li>XR Fragments primes the XRWG, by collecting words from the <code>tag</code> and name-property of 3D objects (see the sketch after this list).</li>
<li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype & Data URI)</li>
<li><ahref="https://github.com/coderofsalvation/hashtagbibs">Bib’s</a> and BibTex are first tag citizens for priming the XRWG with words (from XR text)</li>
<li>Like Bibs, XR Fragments generalizes the BibTex author/title-semantics (<code>author{title}</code>) into <strong>this</strong> points to <strong>that</strong> (<code>this{that}</code>)</li>
<li>The XRWG should be recalculated when textvalues (in <code>src</code>) change</li>
<li>HTML/RDF/JSON is still great, but is beyond the XRWG-scope (they fit better in the application-layer)</li>
<li>Applications don’t have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li>
<li>XR Fragments focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li>
<li>Tags are the scope for now (supporting <a href="https://github.com/WICG/scroll-to-text-fragment">https://github.com/WICG/scroll-to-text-fragment</a> will be considered)</li>
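<p>A sketch of such priming (collecting words from name- and <code>tag</code>-properties, as listed above; tags are assumed to be space-separated in <code>userData.tag</code>):</p>
<pre><code>// sketch: prime an XRWG (word -> objects) by traversing the scene
function buildXRWG(scene) {
  const xrwg = new Map();
  const add = (word, node) => {
    if (!word) return;
    if (!xrwg.has(word)) xrwg.set(word, []);
    xrwg.get(word).push(node);
  };
  scene.traverse((node) => {
    add(node.name, node);                                             // name-property
    (node.userData.tag || '').split(' ').forEach((t) => add(t, node)); // tag-property
  });
  return xrwg;
}
</code></pre>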
<p>both the <code>#john@baroque</code> bib and the BibTex <code>@baroque{john}</code> result in the same XRWG; on top of that, two tags (<code>house</code> and <code>todo</code>) are now associated with text/objectname/tag ‘baroque’.</p>
<p><ahref="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</a> potentially allow the enduser to annotate text/objects by <strong>speaking/typing/scanning associations</strong>, which the XR Browser saves to remotestorage (or localStorage per toplevel URL). As well as, referencing BibTags per URI later on: <code>https://y.io/z.fbx#@baroque@todo</code> e.g.</p>
<li>The XR Browser needs to adjust tag-scope based on the enduser’s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking ‘toggle metadata’ on the ‘back’ (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<li>respect multi-line BibTex metadata in text because of <a href="#core-principle">the core principle</a></li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by abandoning plain text as first tag citizen.</li>
<p>The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by <a href="https://visual-meta.info">visual-meta</a> in greater detail.</p>
<li>lines beginning with <code>@</code> will not be rendered verbatim by default (<a href="https://github.com/coderofsalvation/hashtagbibs#hashtagbib-mimetypes">read more</a>)</li>
<li>the XRWG should expand bibs occurring in text to BibTex (<code>#contactjohn@todo@important</code> e.g.), as shown below</li>
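<p>For example, such an expansion could be as small as the following (the regex is illustrative, not normative):</p>
<pre><code>// sketch: expand '#name@tag1@tag2' bibs into BibTex records
function expandBibs(text) {
  return text.replace(/#(\w+)((?:@\w+)+)/g, (m, name, tags) =>
    tags.split('@').filter(Boolean)
        .map((tag) => `@${tag}{${name}}`).join('\n')
  );
}

console.log(expandBibs('#contactjohn@todo@important'));
// @todo{contactjohn}
// @important{contactjohn}
</code></pre>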
<p>This significantly expands the expressiveness and portability of human-tagged text, by <strong>postponing machine-concerns to the end of the human text</strong>, in contrast to literal interweaving of content and markup symbols (or extra network requests, webservices e.g.).</p>
<p>additional tagging using <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with ‘todo’, the enduser can type or speak <code>#note_canvas@todo</code></p>
<p>To prime the XRWG with text from plain text <code>src</code>-values, here’s an example XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex):</p>
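<p>A condensed sketch of such a (de)multiplexer (the full version lives in the xrfragment repository; here metadata-lines are simply split off from the human text):</p>
<pre><code>// sketch: demultiplex XR Text into human text and trailing bibs/BibTex
function demux(src) {
  const human = [], tags = [];
  for (const line of src.split('\n')) {
    // lines starting with '@' (BibTex) or containing bibs ('#word@tag')
    // are metadata and not rendered verbatim
    if (/^@/.test(line.trim()) || /#\w+@/.test(line)) tags.push(line);
    else human.push(line);
  }
  return { text: human.join('\n'), tags: tags.join('\n') };
}

const parsed = demux('hello world\n\n@baroque{john}\n#note_canvas@todo');
</code></pre>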
<p>when an XR browser updates the human text, a quick scan for nonmatching tags (<code>@book{nonmatchingbook</code> e.g.) should be performed, prompting the enduser to delete them.</p>
<p>In spirit of Ted Nelson’s ‘transclusion resolution’, there’s a soft-mechanism to harden links & minimize broken links in various ways:</p>
<li>in case of <code>src</code>: a copy of the embedded object nested in the placeholder object (<code>embeddedObject</code>) will not be replaced when the request fails</li>
<p>Due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols map easily to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.), as sketched below.</p>
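<p>A sketch of this soft-mechanism (a <code>fetch</code>-based loader keeping the nested fallback copy; the error-to-HTTP-code mapping is an assumption):</p>
<pre><code>// sketch: keep the embedded fallback copy when a src request fails
async function loadWithFallback(placeholder, url) {
  try {
    const res = await fetch(url);
    if (!res.ok) throw res.status;                        // e.g. 404, 410
    placeholder.getObjectByName('embeddedObject')?.removeFromParent();
    return res;                                           // hand to mimetype renderer
  } catch (code) {
    // non-HTTP errors map to HTTP codes (ipfs ERR_NOT_FOUND -> 404 e.g.)
    console.warn(`src ${url} failed (${code}), keeping embedded copy`);
  }
}
</code></pre>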
<p>This would hide all objects tagged with <code>topic</code>, <code>courses</code> or <code>theme</code> (including math), so that later only objects tagged with <code>math</code> will be visible</p>
<p>This makes spatial content multi-purpose, without the need to separate content into separate files, or show/hide things using a complex logiclayer like javascript.</p>
<p><strong>Q:</strong> Why is everything HTTP GET-based; what about POST/PUT/DELETE (HATEOAS)?<br>
<strong>A:</strong> Because it’s out of scope: XR Fragments specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML through <code>src</code> values)</p>
<p><strong>Q:</strong> Why isn’t there support for scripting, while we have things like WASM?
<strong>A:</strong> This is out of scope, as it would unhyperify hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br>To prevent this backward movement (hypermedia tends to liberate people from finicky scripting), XR Fragments should never unhyperify itself by hardcoupling to a particular markup or scripting language. <a href="https://xrfragment.org/doc/RFC_XR_Macros.html">XR Macros</a> are an example of something which is probably smarter and safer for hypermedia browsers to implement, instead of going all-in with a turing-complete scripting language (and suffering the security consequences later).<br>
XR Fragments supports only filtering objects in a scene, because in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br>
Advanced scripting & network requests under the hood are obviously interesting endeavours, but they should not be hardcoupled with hypermedia.<br>They belong in browser extensions.<br>
Non-HTML hypermedia browsers should treat browser extensions as the right place to ‘extend’ experiences, in contrast to code/javascript inside hypermedia documents (which turned out to be a hypermedia antipattern).</p>
<td>an easy to speak/type/scan tagging SDL (<a href="https://github.com/coderofsalvation/hashtagbibs">see here</a>) which expands to BibTex/JSON/XML</td>