<!-- for annotated version see: https://raw.githubusercontent.com/ietf-tools/rfcxml-templates-and-schemas/main/draft-rfcxml-general-template-annotated-00.xml -->
<p>This draft is a specification for 4D URIs & <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation, to enable a spatial web for hypermedia browsers with or without a network connection.<br>
The specification uses <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a> to promote spatial addressability, sharing, navigation, filtering and databinding of objects for (XR) Browsers.<br>
XR Fragments allows us to better use existing metadata inside 3D scene(files), by connecting it to proven technologies like <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a>.<br>
XR Fragments views spatial webs thru the lens of 3D scene URIs, rather than thru code(frameworks) or protocol-specific browsers (webbrowser e.g.).</p>
<li>addressability and <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> navigation of 3D scenes/objects: <a href="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</a> using src/href spatial metadata</li>
Instead of forcing authors to combine 3D/2D objects programmatically (publishing thru a game-editor e.g.), XR Fragments <strong>integrates all</strong>, which allows a universal viewing experience.<br></p>
<p>Fact: our typical browser URLs are just <strong>a possible implementation</strong> of URIs (for the untapped human-centric potential of URIs <a href="https://interpeer.io">see interpeer.io</a>)</p>
<blockquote>
<p>XR Fragments does not look at XR (or the web) thru the lens of HTML or URLs.<br>Instead, it approaches things from a higher-level feedbackloop/hypermedia browser-perspective.</p>
</blockquote>
<p>Below you can see how this translates back into good-old URLs:</p>
<p>?-linked and #-linked navigation are just one possible way to implement XR Fragments: the essential goal is to allow a Hypermediatic FeedbackLoop (HFL) between external and internal 4D navigation.</p>
as well (which allows many extra interactions which otherwise need a scripting language). This is known as <strong>hashbus</strong>-only events (see image above).</p>
<blockquote>
<p>Being able to use the same URI Fragment DSL for navigation (<code>href: #foo</code>) as well as interactions (<code>href: xrf://#bar</code>) greatly simplifies implementation, increases HFL, and reduces need for scripting languages.</p>
</blockquote>
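A minimal sketch, in plain javascript, of how one fragment parser can serve both cases; the `navigate` function, `listeners` array and `topLevelHash` variable are hypothetical stand-ins for a real XR browser's hashbus and URL-bar, not part of the spec:

```javascript
// Sketch: one URI Fragment parser for both navigation (#foo) and the
// xrf:// hashbus (xrf://#bar). Hashbus-only hrefs emit events but never
// touch the top-level URL.
const listeners = [];
let topLevelHash = '';

function parseFragment(hash) {
  // '#foo=1&bar' -> { foo: '1', bar: '' }
  return Object.fromEntries(
    hash.replace(/^#/, '').split('&').filter(Boolean)
        .map(kv => kv.split('=')).map(([k, v]) => [k, v || ''])
  );
}

function navigate(href) {
  const hashbusOnly = href.startsWith('xrf://');
  const hash = href.replace(/^xrf:\/\//, '');
  const frag = parseFragment(hash);
  listeners.forEach(fn => fn(frag));      // always emit on the hashbus
  if (!hashbusOnly) topLevelHash = hash;  // only plain #-hrefs update the URL
  return frag;
}
```

Calling `navigate('#pos=0,0,0')` would update the (simulated) URL-bar, while `navigate('xrf://#player=play')` only fires hashbus events.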
<p>This opens up the following benefits for traditional & future webbrowsers:</p>
<li><a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> loading/clicking 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li>
<li>allowing 3D assets/nodes to publish XR Fragments to themselves/each other using the <code>xrf://</code> hashbus</li>
<p>XR Fragments itself is <a href="https://github.com/coderofsalvation/hypermediatic">hypermediatic</a> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</p>
<p>An important aspect of HFL is that URI Fragments can be triggered without updating the top-level URI (default href-behaviour) thru their own ‘bus’ (<code>xrf://#.....</code>). This decoupling between navigation and interaction prevents non-standard things like (<code>href</code>:<code>javascript:dosomething()</code>).</p>
<p>Pseudo (non-native) browser-implementations (supporting XR Fragments using HTML+JS e.g.) can use the <code>?</code> search-operator to address outbound content.<br>
In other words, the URL updates to: <code>https://me.com?https://me.com/other.glb</code> when navigating to <code>https://me.com/other.glb</code> from inside a <code>https://me.com</code> WebXR experience e.g.<br>
That way, if the link gets shared, the XR Fragments implementation at <code>https://me.com</code> can load the latter (and still indicates which XR Fragments entrypoint-experience/client was used).</p>
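The `?` search-operator convention above can be sketched with two small helpers; the function names are illustrative, only the resulting URL shape (`https://me.com?https://me.com/other.glb`) comes from the spec:

```javascript
// Sketch of the '?' search-operator used by pseudo (HTML+JS) XR Fragment
// implementations to address outbound content in a shareable URL.
function outboundURL(entrypoint, destination) {
  // navigating from inside `entrypoint` to `destination`
  return entrypoint + '?' + destination;
}

function resolveOutbound(sharedURL) {
  // the implementation at the entrypoint loads whatever follows '?'
  const i = sharedURL.indexOf('?');
  return i === -1 ? null : sharedURL.slice(i + 1);
}
```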
<p>It also allows <strong>sourceportation</strong>: the enduser can teleport to the original XR Document of an <code>src</code>-embedded object, and see a visible connection to that particular embedded object. Basically, activating an embedded link turns it into an outbound link.</p>
<p>Supported popular compatible 3D fileformats: <code>.gltf</code>, <code>.obj</code>, <code>.fbx</code>, <code>.usdz</code>, <code>.json</code> (THREE.js), <code>.dae</code> and so on.</p>
<td>evaluates preset (<code>#foo&bar</code>) defined in 3D Object metadata (<code>#cubes: #foo&bar</code> e.g.) while URL-browserbar reflects <code>#cubes</code>. Only works when metadata-key starts with <code>#</code></td>
<td>will reset (<code>!</code>), show/focus, or hide (<code>-</code>) object(s) with <code>tag: person</code> or name <code>person</code> by looking up the XRWG (<code>*</code> = including children)</td>
<td>sets <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Template</a> variable <code>foo</code> to the value <code>#t=0</code> from <strong>existing</strong> object metadata (<code>bar</code>:<code>#t=0</code> e.g.). This allows for reactive <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates</a> defined in object metadata elsewhere (<code>src</code>:<code>://m.com/cat.mp4#{foo}</code> e.g., to play media using a <a href="https://www.w3.org/TR/media-frags/#valid-uri">media fragment URI</a>). NOTE: the metadata-key should not start with <code>#</code></td>
<p>NOTE: below the word ‘play’ applies to 3D animations embedded in the 3D scene(file) <strong>but also</strong> media defined in <code>src</code>-metadata like audio/video-files (mp3/mp4 e.g.)</p>
<p>* = this is extending the <a href="https://www.w3.org/TR/media-frags/#mf-advanced">W3C media fragments</a> with (missing) playback/viewport-control. Normally <code>#t=0,2</code> implies setting start/stop-values AND starting playback, whereas <code>#s=0&loop</code> allows pausing a video, speeding up/slowing down media, as well as enabling/disabling looping.</p>
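The extended `t`/`loop` behaviour described above can be sketched as a small parser; this is an assumption-level reading of the table, with illustrative field names rather than mandated spec API:

```javascript
// Sketch of the extended '#t=' playback control: parse start/stop plus the
// loop flag. A bare '#t=start,stop' both sets the range and implies that
// playback starts.
function parsePlayback(hash) {
  const params = new URLSearchParams(hash.replace(/^#/, ''));
  const [start, stop] = (params.get('t') || '0').split(',').map(Number);
  return {
    start,                                  // playback start (seconds)
    stop: stop === undefined ? null : stop, // optional stop value
    loop: params.has('loop'),               // extension: enable looping
    playing: params.has('t'),               // t= implies 'start playing'
  };
}
```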
<p>The rationale for <code>uv</code> is that the <code>xywh</code> Media Fragment deals with rectangular media, which does not translate well to 3D models (which use triangular polygons, not rectangles) positioned by uv-coordinates. This also explains the absence of a <code>scale</code> or <code>rotate</code> primitive, which are complicated by this, as well as by multiple origins (mesh- or texture).</p>
<blockquote>
<p>NOTE: URI Template variables are immutable and respect scope: in other words, the end-user cannot modify <code>blue</code> by entering a URL like <code>#blue=.....</code> in the browser URL, and <code>blue</code> is not accessible by the plane/media-object (however <code>{play}</code> would work).</p>
</blockquote>
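A sketch of the immutability rule above: only variables primed from object metadata expand, and values typed into the browser URL never reach that store. The `templateVars` store is a hypothetical illustration, not spec-mandated API:

```javascript
// Sketch of safe Level-0 URI Template expansion: a variable expands only if
// it exists in the scene's metadata-primed store; unknown variables (and
// anything an end-user tries to inject via the URL) stay untouched.
function expandTemplate(src, templateVars) {
  return src.replace(/\{(\w+)\}/g, (match, name) =>
    name in templateVars ? templateVars[name] : match);
}
```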
<li>the Y-coordinate of <code>pos</code> identifies the floor position. This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).</li>
<li>set the position of the camera accordingly to the vector3 values of <code>#pos</code></li>
<li><code>rot</code> sets the rotation of the camera (only for non-VR/AR headsets)</li>
<li>after scene load: in case the scene (rootnode) contains a <code>#</code> default view with a fragment value: execute non-positional fragments via the hashbus (no top-level URL change)</li>
<li>after scene load: in case the scene (rootnode) contains a <code>#</code> default view with a fragment value: execute positional fragments via the hashbus + update the top-level URL</li>
<li>in case of no default <code>#</code> view on the scene (rootnode), default player(rig) position <code>0,0,0</code> is assumed.</li>
<li>in case a <code>href</code> does not mention any <code>pos</code>-coordinate, the current position will be assumed</li>
<p>An XR Fragment-compatible browser viewing this scene allows the end-user to interact with <code>buttonA</code> and <code>buttonB</code>.<br>
In case of <code>buttonA</code> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <code>buttonB</code> will <strong>replace the current scene</strong> with a new one, like <code>other.fbx</code>, and assume <code>pos=0,0,0</code>.</p>
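The `buttonA`/`buttonB` distinction can be sketched as a tiny classifier; the function name and return shape are illustrative, only the teleport-vs-replace behaviour and the `pos=0,0,0` default come from the text:

```javascript
// Sketch: a fragment-only href teleports within the current scene, while a
// file/external href replaces the scene and falls back to pos=0,0,0 when no
// position is given.
function classifyHref(href) {
  if (href.startsWith('#')) return { action: 'teleport', target: href };
  const [file, frag] = href.split('#');
  const pos = (frag || '').match(/pos=([^&]+)/);
  return { action: 'replace-scene', file, pos: pos ? pos[1] : '0,0,0' };
}
```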
<li>IF a <code>#cube</code> matches a custom property-key (of an object) in the 3D file/scene (<code>#cube</code>: <code>#......</code>) <b>THEN</b> execute that predefined_view.</li>
<li>IF scene operators (<code>pos</code>) and/or animation operator (<code>t</code>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li>
<li>IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with <code>#pos=0,0,0</code> (<a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31">example</a>)</li>
<li>IF a <code>#cube</code> matches the name (of an object) in the 3D file/scene then draw a line from the enduser(’s heart) to that object (to highlight it).</li>
<li>IF a <code>#cube</code> matches anything else in the XR Word Graph (XRWG) draw wires to them (text or related objects).</li>
It instances content (in objects) in the current scene/asset, and follows logic similar to the previous chapter, except that it does not modify the camera.</p>
<p>Here’s an ascii representation of a 3D scene-graph with 3D objects <code>◻</code> which embeds remote & local 3D objects <code>◻</code> with/out using filters:</p>
<p>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <code>painting.png</code> onto the (plane) object called <code>canvas</code> (which is copy-instanced in the bed and livingroom).<br>
Also, after lazy-loading <code>ocean.com/aquarium.gltf</code>, only the queried objects <code>fishbowl</code> (and <code>bass</code> and <code>tuna</code>) will be instanced inside <code>aquariumcube</code>.<br>
<p>Instead of cherrypicking a rootobject <code>#fishbowl</code> with <code>src</code>, additional filters can be used to include/exclude certain objects. See next chapter on filtering below.</p>
<li>local/remote content is instanced by the <code>src</code> (filter) value (and attaches it to the placeholder mesh containing the <code>src</code> property)</li>
<li>by default all objects are loaded into the instanced src (scene) object (but not shown yet)</li>
<li><b>local</b> <code>src</code> values (<code>#...</code> e.g.) starting with a non-negating filter (<code>#cube</code> e.g.) will (deep)reparent that object (with name <code>cube</code>) as the new root of the scene at position 0,0,0</li>
<li><b>local</b> <code>src</code> values should respect (negative) filters (<code>#-foo&price=>3</code>)</li>
<li>the instanced scene (from a <code>src</code> value) should be <b>scaled accordingly</b> to its placeholder object or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an ‘empty’-object in blender e.g.). For more info see Chapter Scaling.</li>
<li><b>external</b> <code>src</code> values should be served with an appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:</li>
<li><code>src</code> values should make their placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see <a href="#links">broken links</a>)</li>
<li><b>external</b> <code>src</code> values should respect the fallback link mechanism (see <a href="#broken-links">broken links</a>)</li>
<li>src-values are non-recursive: when linking to an external object (<code>src: foo.fbx#bar</code>), then <code>src</code>-metadata on object <code>bar</code> should be ignored.</li>
<li>an external <code>src</code>-value should always allow a sourceportation icon within 3 meter: teleporting to the origin URI to which the object belongs.</li>
<li>when the enduser clicks an href with <code>#t=1,0,0</code>, (play) will be applied to all src mediacontent with a timeline (mp4/mp3 e.g.)</li>
<li>a non-euclidian portal can be rendered for flat 3D objects (using a stencil buffer e.g.) in case of spatial <code>src</code>-values (an object <code>#world3</code> or URL <code>world3.fbx</code> e.g.).</li>
<li><p>clicking an outbound “external”- or “file URI” fully replaces the current scene and assumes <code>pos=0,0,0&rot=0,0,0</code> by default (unless specified)</p></li>
<li><p>navigation should not happen “immediately” when user is more than 5 meter away from the portal/object containing the href (to prevent accidental navigation e.g.)</p></li>
<li><p>URL navigation should always be reflected in the client URL-bar (in case of javascript: see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</a> for an example navigator), and only update the URL-bar after the scene (default fragment <code>#</code>) has been loaded.</p></li>
<li><p>In immersive XR mode, the navigator back/forward-buttons should be always visible (using a wearable e.g., see <a href="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</a> for an example wearable)</p></li>
<li><p>make sure that the “back-button” of the “browser-history” always refers to the previous position (see <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</a>)</p></li>
<li><p>ignore previous rule in special cases, like clicking an <code>href</code> using camera-portal collision (the back-button could cause a teleport-loop if the previous position is too close)</p></li>
<li><p>href-events should bubble upward the node-tree (from children to ancestors, so that ancestors can also contain an href), however only 1 href can be executed at the same time.</p></li>
<li><p>the end-user navigator back/forward buttons should repeat a back/forward action until a <code>pos=...</code> primitive is found (the stateless xrf:// href-values should not be pushed to the url-history)</p></li>
<p>End-users should always have read/write access to:</p>
<ol>
<li>the current (toplevel) <b>URL</b> (a URL-bar e.g.)</li>
<li>URL-history (a <b>back/forward</b> button e.g.)</li>
<li>Clicking/Touching an <code>href</code> navigates (and updates the URL) to another scene/file (and coordinate, in case the URL contains XR Fragments).</li>
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br></p>
<blockquote>
<p>Rule of thumb: visible placeholder objects act as a ‘3D canvas’ for the referenced scene (a plane acts like a 2D canvas for images, a cube as a 3D canvas e.g.).</p>
</blockquote>
<ol>
<li><b>IF</b> an embedded property (<code>src</code> e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):</li>
</ol>
<ul>
<li>calculate the <b>bounding box</b> of the “placeholder” object (maxsize=1.4 e.g.)</li>
<li>hide the “placeholder” object (material e.g.)</li>
<li>instance the <code>src</code> scene as a child of the existing object</li>
<li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li>
</ul>
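The bounding-box steps above can be sketched without a 3D engine, using plain axis-aligned bounds over flat vertex arrays; a real implementation would use its engine's helper (THREE.Box3 e.g.), so treat the shapes and names here as illustrative:

```javascript
// Sketch of the scaling rule: fit the instanced scene into the placeholder's
// bounding box. Vertices are flat [x,y,z, x,y,z, ...] arrays.
function boundingBoxSize(vertices) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < vertices.length; i += 3)
    for (let a = 0; a < 3; a++) {
      min[a] = Math.min(min[a], vertices[i + a]);
      max[a] = Math.max(max[a], vertices[i + a]);
    }
  return [max[0] - min[0], max[1] - min[1], max[2] - min[2]];
}

function fitScale(placeholderVerts, sceneVerts) {
  const p = boundingBoxSize(placeholderVerts);
  const s = boundingBoxSize(sceneVerts);
  // scale so the scene's largest dimension fits the placeholder's largest
  return Math.max(...p) / Math.max(...s);
}
```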
<blockquote>
<p>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</p>
<li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object.</li>
<p>» example implementation: <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js</a><br></p>
<p>» example implementation: <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js">https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/pos.js</a><br></p>
<p>» example implementation: <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/t.js">https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/t.js</a><br></p>
<li>to enable enduser-triggered play, use a URI Template XR Fragment: (<code>src: bar.mp3#{player}</code> and <code>play: t=0&loop</code> and <code>href: xrf://#player=play</code> e.g.)</li>
<li>when the enduser clicks the <code>href</code>, <code>#t=0&loop</code> (play) will be applied to the <code>src</code> value</li>
<li>see <a href="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</a> which used a dedicated <code>q=</code> variable (now deprecated: filters are usable directly)</li>
<p>NOTE 1: after an external embedded object has been instanced (<code>src: https://y.com/bar.fbx#room</code> e.g.), filters do not affect them anymore (reason: local tag/name collisions can be mitigated easily, but not in case of remote content).</p>
<p>NOTE 2: depending on the used 3D framework, toggling objects (in)visible should happen by enabling/disabling writing to the colorbuffer (so children can remain visible while their parents are invisible).</p>
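NOTE 2 above can be sketched on a plain object tree; the field names mirror THREE.js (`material.colorWrite`, `children`) but the tree here is plain javascript, so this is an illustration of the technique rather than engine code:

```javascript
// Sketch: 'hiding' an object disables colorbuffer writes on its material
// instead of toggling node visibility, so children of a hidden parent can
// still be shown independently.
function setShown(node, shown, { recursive = false } = {}) {
  if (node.material) node.material.colorWrite = shown;
  if (recursive) (node.children || []).forEach(c => setShown(c, shown, { recursive }));
}
```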
<p>An example filter-parser (which compiles to many languages) can be <a href="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Filter.hx">found here</a></p>
<p>When predefined views, XRWG fragments and ID fragments (<code>#cube</code> or <code>#mytag</code> e.g.) are triggered by the enduser (via toplevel URL or clicking <code>href</code>):</p>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <code>tag</code> value</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <code>src</code> or <code>href</code> value</li>
<p>The obvious approach for this, is to consult the XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), which basically has all these things already collected/organized for you during scene-load.</p>
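Priming such a word graph can be sketched in a few lines; the node shape (`name`, `tag`) follows the spec's metadata keys, while the `Map`-based graph layout is an illustrative assumption:

```javascript
// Sketch: prime an XRWG from scene nodes by collecting (lowercase) words
// from each node's name and `tag` metadata, mapping word -> matching nodes.
function buildXRWG(nodes) {
  const graph = new Map();
  for (const node of nodes) {
    const words = [node.name, ...(node.tag || '').split(/\s+/)].filter(Boolean);
    for (const w of words.map(s => s.toLowerCase())) {
      if (!graph.has(w)) graph.set(w, []);
      graph.get(w).push(node);
    }
  }
  return graph;
}
```

Looking up a fragment like `#person` then reduces to `graph.get('person')`, which yields every object to draw a wire to.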
<p><strong>UX</strong></p>
<ol start="4">
<li>do not update the wires when the enduser moves, leave them as is</li>
<li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the XRWG</li>
<p>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong>, <a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>), augmented by Bib(s)Tex.</p>
<li>XR Fragments promotes (de)serializing a scene to a (lowercase) XRWG (<a href="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</a>)</li>
<li>XR Fragments primes the XRWG, by collecting words from the <code>tag</code> and name-property of 3D objects.</li>
<li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype & Data URI)</li>
<li>Applications don’t have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li>
<li>The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li>
<li>The XR Browser needs to adjust tag-scope based on the enduser’s needs/focus (infinite tagging only makes sense when the environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking ‘toggle metadata’ on the ‘back’ (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<li>respect multi-line BibTex metadata in text because of <a href="#core-principle">the core principle</a></li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <a href="#core-principle">the core principle</a>).</li>
<li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <a href="#core-principle">the core principle</a>)</li>
<li>anti-pattern: limiting human introspection, by abandoning plain text as first-class citizen.</li>
<li>words beginning with <code>#</code> (hashtags) will prime the XRWG: the hashtag is added to the XRWG, linked to the current sentence/paragraph/alltext (depending on ‘.’)</li>
<p>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</p>
<p>additional tagging using <a href="https://github.com/coderofsalvation/hashtagbibs">bibs</a>: to tag spatial object <code>note_canvas</code> with ‘todo’, the enduser can type or speak <code>#note_canvas@todo</code></p>
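Parsing such a spoken/typed bib can be sketched with one regular expression; the grammar here is deliberately simplified from the hashtagbibs project, so treat it as an illustration only:

```javascript
// Sketch: parse a hashtagbib like '#note_canvas@todo' into an (object, tag)
// pair; anything that does not match the simplified '#object@tag' shape is
// rejected.
function parseBib(input) {
  const m = input.match(/^#([A-Za-z0-9_]+)@([A-Za-z0-9_]+)$/);
  return m ? { object: m[1], tag: m[2] } : null;
}
```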
<p>Environment mapping is crucial for creating realistic reflections and lighting effects on 3D objects.
To apply environment mapping efficiently in a 3D scene, traverse the scene graph and assign each object’s environment map based on the nearest ancestor’s texture map. This ensures that objects inherit the correct environment mapping from their closest parent with a texture, enhancing the visual consistency and realism.</p>
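The nearest-ancestor rule above can be sketched as a single recursive traversal on a plain scene tree; the property names (`textureMap`, `envMap`, `children`) are illustrative assumptions, not engine API:

```javascript
// Sketch: while traversing the scene graph, remember the nearest texture map
// seen so far (ancestor-or-self) and assign it as each node's environment map.
function applyEnvMaps(node, inherited = null) {
  const current = node.textureMap || inherited;
  if (current) node.envMap = current;
  (node.children || []).forEach(c => applyEnvMaps(c, current));
}
```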
<p>In the spirit of Ted Nelson’s ‘transclusion resolution’, there’s a soft mechanism to harden links & minimize broken links in various ways:</p>
<ol>
<li>defining a different transport protocol (https vs ipfs or DAT) in <code>src</code> or <code>href</code> values can make a difference</li>
<li>mirroring files on another protocol using (HTTP) errorcode tags in <code>src</code> or <code>href</code> properties</li>
<li>in case of <code>src</code>: a nested copy of the embedded object in the placeholder object (<code>embeddedObject</code>) will not be replaced when the request fails</li>
</ol>
<blockquote>
<p>Due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)</p>
</blockquote>
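The errorcode-tag mirroring idea can be sketched as a lookup; the mirror-metadata shape (`{ '404': 'ipfs://...' }`) is an assumed illustration of "errorcode tags in `src` or `href` properties", not a normative format:

```javascript
// Sketch: given a failed src/href request and mirror metadata keyed by HTTP
// errorcode, pick a fallback URL; non-HTTP errors are assumed to be mapped to
// HTTP codes first (ipfs ERR_NOT_FOUND -> 404 e.g.).
function resolveFallback(url, errorCode, mirrors) {
  const code = String(errorCode);
  return (mirrors && mirrors[code]) || url; // no mirror: keep the original URL
}
```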
<blockquote>
<p>This would hide all objects tagged with <code>topic</code>, <code>courses</code> or <code>theme</code> (including math) so that later only objects tagged with <code>math</code> will be visible</p>
</blockquote>
<p>This makes spatial content multi-purpose, without the need to separate content into separate files, or to show/hide things using a complex logic-layer like javascript.</p>
<li><a href="https://bibtex.eu/fields">BibTex</a> when known bibtex-keys exist with values enclosed in <code>{</code> and <code>}</code>,</li>
</ul>
<p><strong>ARIA</strong> (<code>aria-description</code>) is the most important to support, as it promotes accessibility and allows scene transcripts. Please start <code>aria-description</code> with a verb to aid transcripts.</p>
<blockquote>
<p>Example: object ‘tryceratops’ with <code>aria-description: is a huge dinosaur standing on a #mountain</code> generates transcript <code>#tryceratops is a huge dinosaur standing on a #mountain</code>, where the hashtags are clickable XR Fragments (activating the visible-links in the XR browser).</p>
</blockquote>
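The transcript generation in the example above can be sketched in two helpers; the function names are hypothetical, only the `#name + aria-description` output shape comes from the example:

```javascript
// Sketch: build a transcript line from an object's name and its
// aria-description (which should start with a verb), then extract the
// hashtags that become clickable XR Fragments in the XR browser.
function transcriptLine(name, ariaDescription) {
  return '#' + name + ' ' + ariaDescription;
}

function extractFragments(transcript) {
  return transcript.match(/#[A-Za-z0-9_]+/g) || [];
}
```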
<p>Individual nodes can be enriched with such metadata, but most importantly the scene node:</p>
<li>The <code>back</code> command should navigate back to the previous URL (alias for browser-backbutton)</li>
<li>The <code>forward</code> command should navigate forward to the next URL (alias for browser-nextbutton)</li>
<li>A destination is a 3D node containing an <code>href</code> with a <code>pos=</code> XR fragment</li>
<li>The <code>go</code> command should list all possible destinations</li>
<li>The <code>go left</code> command should move the camera around 0.3 meters to the left</li>
<li>The <code>go right</code> command should move the camera around 0.3 meters to the right</li>
<li>The <code>go forward</code> command should move the camera 0.3 meters forward (direction of current rotation).</li>
<li>The <code>rotate left</code> command should rotate the camera 0.3 to the left</li>
<li>The <code>rotate right</code> command should rotate the camera 0.3 to the right</li>
<li>The (dynamic) <code>go abc</code> command should navigate to <code>#pos=scene2</code> in case there’s a 3D node with name <code>abc</code> and <code>href</code> value <code>#pos=scene2</code></li>
<li>The <code>look</code> command should give a (contextual) 3D-to-text transcript, by scanning the <code>aria-description</code> values of the current <code>pos=</code> value (including its children)</li>
<li>The <code>do</code> command should list all possible <code>href</code> values which don’t contain a <code>pos=</code> XR Fragment</li>
<li>The (dynamic) <code>do abc</code> command should navigate/execute <code>https://.../...</code> in case a 3D node exists with name <code>abc</code> and <code>href</code> value <code>https://.../...</code></li>
<p>The only dynamic parts are <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a>.<br>
The use of URI Templates is limited to pre-defined variables and Level-0 fragment-expansion only, which makes it quite safe.<br>
In fact, it is much safer than relying on a scripting language (javascript), which can change URLs too.</p>
<p><strong>Q:</strong> Why is everything HTTP GET-based, what about POST/PUT/DELETE (HATEOAS)?<br>
<strong>A:</strong> Because it’s out of scope: XR Fragments specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML thru <code>src</code> values)</p>
<p><strong>Q:</strong> Why isn’t there support for scripting? URI Template Fragments are so limited compared to WASM & javascript.<br>
<strong>A:</strong> This is out of scope, as it unhyperifies hypermedia; it is up to XR hypermedia browser-extensions.<br> Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br>In order to prevent this backward movement (hypermedia tends to liberate people from finicky scripting), XR Fragments uses <a href="https://www.w3.org/TR/media-frags/">W3C Media Fragments</a> and <a href="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</a>, to prevent unhyperifying itself by hardcoupling to a particular markup or scripting language. <br>
XR Fragments supports filtering objects in a scene only, because in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br>
Advanced scripting & network-requests under the hood are obviously interesting endeavours, but this is something which should not be hardcoupled with XR Fragments or hypermedia.<br>This perhaps belongs more to browser extensions.<br>
Non-HTML Hypermedia browsers should consider browser extensions the right place to ‘extend’ experiences, in contrast to code/javascript inside hypermedia documents (this turned out to be a hypermedia antipattern).</p>
<td>some resource at something somewhere via someprotocol (<code>http://me.com/foo.glb#foo</code> or <code>e76f8efec8efce98e6f</code>, <a href="https://interpeer.io">see interpeer.io</a>)</td>
</tr>
<tr>
<td>URL</td>
<td>something somewhere via someprotocol (<code>http://me.com/foo.glb</code>)</td>
<td>an easy to speak/type/scan tagging SDL (<a href="https://github.com/coderofsalvation/hashtagbibs">see here</a>) which expands to BibTex/JSON/XML</td>