<t>This draft is a specification for 4D URIs and <ereftarget="https://github.com/coderofsalvation/hypermediatic">hypermediatic</eref> navigation, to enable a spatial web for hypermedia browsers with or without a network connection.<br/>
The specification uses <ereftarget="https://www.w3.org/TR/media-frags/">W3C Media Fragments</eref> and <ereftarget="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</eref> to promote spatial addressability, sharing, navigation, filtering and databinding of objects for (XR) Browsers.<br/>
XR Fragments allows us to better use existing metadata inside 3D scene(files), by connecting it to proven technologies like <ereftarget="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref>.<br/>
XR Fragments views spatial webs thru the lens of 3D scene URIs, rather than thru code(frameworks) or protocol-specific browsers (a webbrowser e.g.).</t>
<li>addressability and <ereftarget="https://github.com/coderofsalvation/hypermediatic">hypermediatic</eref> navigation of 3D scenes/objects: <ereftarget="https://en.wikipedia.org/wiki/URI_fragment">URI Fragments</eref> using src/href spatial metadata</li>
<li>Interlinking text & spatial objects by collapsing space into a Word Graph (XRWG) to show <ereftarget="#visible-links">visible links</eref></li>
Instead of forcing authors to combine 3D/2D objects programmatically (publishing thru a game-editor e.g.), XR Fragments <strong>integrates all of them</strong>, which allows a universal viewing experience.<br/>
<t>Fact: our typical browser URLs are just <strong>a possible implementation</strong> of URIs (for the untapped humancentric potential of URIs <ereftarget="https://interpeer.io">see interpeer.io</eref>)</t>
<blockquote><t>XR Fragments does not look at XR (or the web) thru the lens of HTML or URLs.<br/>
but approaches things from a higher-level feedbackloop/hypermedia browser-perspective.</t>
</blockquote><t>Below you can see how this translates back into good-old URLs:</t>
<blockquote><t>?-linked and #-linked navigation are JUST one possible way to implement XR Fragments: the essential goal is to allow a Hypermediatic FeedbackLoop (HFL) between external and internal 4D navigation.</t>
as well (which allows many extra interactions which otherwise need a scripting language). This is known as <strong>hashbus</strong>-only events (see image above).</t>
<blockquote><t>Being able to use the same URI Fragment DSL for navigation (<tt>href: #foo</tt>) as well as interactions (<tt>href: xrf://#bar</tt>) greatly simplifies implementation, increases HFL, and reduces need for scripting languages.</t>
</blockquote><t>This opens up the following benefits for traditional & future webbrowsers:</t>
<li><ereftarget="https://github.com/coderofsalvation/hypermediatic">hypermediatic</eref> loading/clicking 3D assets (gltf/fbx e.g.) natively (with or without using HTML).</li>
<li>allowing 3D assets/nodes to publish XR Fragments to themselves/eachother using the <tt>xrf://</tt> hashbus</li>
<t>XR Fragments itself are <ereftarget="https://github.com/coderofsalvation/hypermediatic">hypermediatic</eref> and HTML-agnostic, though pseudo-XR Fragment browsers <strong>can</strong> be implemented on top of HTML/Javascript.</t>
</table><blockquote><t>An important aspect of HFL is that URI Fragments can be triggered without updating the top-level URI (default href-behaviour) thru their own 'bus' (<tt>xrf://#.....</tt>). This decoupling between navigation and interaction prevents non-standard things like (<tt>href</tt>:<tt>javascript:dosomething()</tt>).</t>
</blockquote><t>Pseudo (non-native) browser-implementations (supporting XR Fragments using HTML+JS e.g.) can use the <tt>?</tt> search-operator to address outbound content.<br/>
In other words, the URL updates to: <tt>https://me.com?https://me.com/other.glb</tt> when navigating to <tt>https://me.com/other.glb</tt> from inside a <tt>https://me.com</tt> WebXR experience e.g.<br/>
That way, if the link gets shared, the XR Fragments implementation at <tt>https://me.com</tt> can load the latter (and still indicate which XR Fragments entrypoint-experience/client was used).</t>
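For pseudo-browser implementers, the <tt>?</tt>-operator scheme above can be sketched in a few lines of javascript (the helper names <tt>toSharedURL</tt>/<tt>fromSharedURL</tt> are illustrative, not part of the spec):

```javascript
// Sketch: reflect outbound navigation in the entry client's URL via the
// '?' search-operator (helper names are illustrative assumptions).
function toSharedURL(entryURL, targetURL) {
  return entryURL + '?' + targetURL;
}

function fromSharedURL(sharedURL) {
  // everything after the first '?' is the outbound document to load
  const i = sharedURL.indexOf('?');
  if (i === -1) return { entry: sharedURL, current: sharedURL };
  return { entry: sharedURL.slice(0, i), current: sharedURL.slice(i + 1) };
}
```

When the shared link is opened, the implementation at <tt>entry</tt> can boot its client and then load <tt>current</tt>.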
<blockquote><t>It also allows <strong>sourceportation</strong>, which basically means the enduser can teleport to the original XR Document of an <tt>src</tt> embedded object, and see a visible connection to the particular embedded object. In effect, an embedded link becomes an outbound link when activated.</t>
</table><blockquote><t>Supported popular compatible 3D fileformats: <tt>.gltf</tt>, <tt>.obj</tt>, <tt>.fbx</tt>, <tt>.usdz</tt>, <tt>.json</tt> (THREE.js), <tt>.dae</tt> and so on.</t>
<td>evaluates preset (<tt>#foo&bar</tt>) defined in 3D Object metadata (<tt>#cubes: #foo&bar</tt> e.g.) while URL-browserbar reflects <tt>#cubes</tt>. Only works when metadata-key starts with <tt>#</tt></td>
<td>will reset (<tt>!</tt>), show/focus or hide (<tt>-</tt>) focus object(s) with <tt>tag: person</tt> or name <tt>person</tt> by looking up XRWG (<tt>*</tt>=including children)</td>
<td>sets <ereftarget="https://www.rfc-editor.org/rfc/rfc6570">URI Template</eref> variable <tt>foo</tt> to the value <tt>#t=0</tt> from <strong>existing</strong> object metadata (<tt>bar</tt>:<tt>#t=0</tt> e.g.). This allows for reactive <ereftarget="https://www.rfc-editor.org/rfc/rfc6570">URI Templates</eref> defined in object metadata elsewhere (<tt>src</tt>:<tt>://m.com/cat.mp4#{foo}</tt> e.g., to play media using a <ereftarget="https://www.w3.org/TR/media-frags/#valid-uri">media fragment URI</eref>). NOTE: the metadata-key should not start with <tt>#</tt></td>
<blockquote><t>NOTE: below the word 'play' applies to 3D animations embedded in the 3D scene(file) <strong>but also</strong> media defined in <tt>src</tt>-metadata like audio/video-files (mp3/mp4 e.g.)</t>
</table><blockquote><t>* = this is extending the <ereftarget="https://www.w3.org/TR/media-frags/#mf-advanced">W3C media fragments</eref> with (missing) playback/viewport-control. Normally <tt>#t=0,2</tt> implies setting start/stop-values AND starting playback, whereas <tt>#s=0&loop</tt> allows pausing a video, speeding up/slowing down media, as well as enabling/disabling looping.</t>
<t>The rationale for <tt>uv</tt> is that the <tt>xywh</tt> Media Fragment deals with rectangular media, which does not translate well to 3D models (which use triangular polygons, not rectangles) positioned by uv-coordinates. This also explains the absence of a <tt>scale</tt> or <tt>rotate</tt> primitive, which would be complicated by this, as well as by multiple origins (mesh- or texture-based).</t>
<blockquote><t>NOTE: URI Template variables are immutable and respect scope: in other words, the end-user cannot modify <tt>blue</tt> by entering a URL like <tt>#blue=.....</tt> in the browser URL, and <tt>blue</tt> is not accessible by the plane/media-object (however <tt>{play}</tt> would work).</t></blockquote>
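A minimal sketch of such URI Template expansion, assuming variables were primed from object metadata only (never from the top-level URL, per the immutability note; the function name is illustrative):

```javascript
// Sketch: expand RFC 6570-style {var} placeholders in a src value using
// variables primed from object metadata. Unknown variables are left as-is.
// (expandSrcTemplate is an illustrative name, not from the spec.)
function expandSrcTemplate(src, vars) {
  return src.replace(/\{(\w+)\}/g, (m, name) =>
    Object.prototype.hasOwnProperty.call(vars, name) ? vars[name] : m);
}
```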
<li>the Y-coordinate of <tt>pos</tt> identifies the floorposition. This means that desktop-projections usually need to add 1.5m (average person height) on top (which is done automatically by VR/AR headsets).</li>
<li>set the position of the camera accordingly to the vector3 values of <tt>#pos</tt></li>
<li><tt>rot</tt> sets the rotation of the camera (only for non-VR/AR headsets)</li>
<li>after scene load: in case the scene (rootnode) contains a <tt>#</tt> default view with a fragment value: execute non-positional fragments via the hashbus (no top-level URL change)</li>
<li>after scene load: in case the scene (rootnode) contains a <tt>#</tt> default view with a fragment value: execute the positional fragment via the hashbus + update the top-level URL</li>
<li>in case of no default <tt>#</tt> view on the scene (rootnode), default player(rig) position <tt>0,0,0</tt> is assumed.</li>
<li>in case a <tt>href</tt> does not mention any <tt>pos</tt>-coordinate, the current position will be assumed</li>
In case of <tt>buttonA</tt> the end-user will be teleported to another location and time in the <strong>current loaded scene</strong>, but <tt>buttonB</tt> will <strong>replace the current scene</strong> with a new one, like <tt>other.fbx</tt>, and assume <tt>pos=0,0,0</tt>.</t>
<li>IF a <tt>#cube</tt> matches a custom property-key (of an object) in the 3D file/scene (<tt>#cube</tt>: <tt>#......</tt>) <b>THEN</b> execute that predefined_view.</li>
<li>IF scene operators (<tt>pos</tt>) and/or animation operator (<tt>t</tt>) are present in the URL then (re)position the camera and/or animation-range accordingly.</li>
<li>IF no camera-position has been set in <b>step 1 or 2</b> update the top-level URL with <tt>#pos=0,0,0</tt> (<ereftarget="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/navigator.js#L31">example</eref>)</li>
<li>IF a <tt>#cube</tt> matches the name (of an object) in the 3D file/scene then draw a line from the enduser('s heart) to that object (to highlight it).</li>
<li>IF a <tt>#cube</tt> matches anything else in the XR Word Graph (XRWG) draw wires to them (text or related objects).</li>
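The IF-steps above (predefined views, name/tag highlighting, XRWG wires) can be sketched as one lookup chain; the object shapes and the <tt>resolveFragment</tt> name below are illustrative assumptions, not normative:

```javascript
// Sketch of the fragment lookup order: predefined view -> name/tag match ->
// XRWG word match (scene is an array of {name, metadata}, xrwg a word list).
function resolveFragment(frag, scene, xrwg) {
  // 1. a custom property-key '#frag' on any object = predefined view
  for (const obj of scene) {
    const preset = obj.metadata && obj.metadata['#' + frag];
    if (preset) return { action: 'predefined_view', value: preset };
  }
  // 2. object name or tag match = highlight (draw line from enduser)
  for (const obj of scene) {
    if (obj.name === frag || (obj.metadata && obj.metadata.tag === frag))
      return { action: 'highlight', object: obj.name };
  }
  // 3. anything else in the XR Word Graph = draw wires
  if (xrwg.includes(frag)) return { action: 'wires', word: frag };
  return { action: 'none' };
}
```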
<t><tt>src</tt> is the 3D version of the <a target="_blank" href="https://www.w3.org/html/wiki/Elements/iframe">iframe</a>.<br/>
It instances content (in objects) in the current scene/asset, and follows logic similar to that of the previous chapter, except that it does not modify the camera.</t>
</table><t>Here's an ascii representation of a 3D scene-graph with 3D objects <tt>◻</tt> which embeds remote & local 3D objects <tt>◻</tt> with/out using filters:</t>
<t>An XR Fragment-compatible browser viewing this scene, lazy-loads and projects <tt>painting.png</tt> onto the (plane) object called <tt>canvas</tt> (which is copy-instanced in the bed and livingroom).<br/>
Also, after lazy-loading <tt>ocean.com/aquarium.gltf</tt>, only the queried objects <tt>fishbowl</tt> (and <tt>bass</tt> and <tt>tuna</tt>) will be instanced inside <tt>aquariumcube</tt>.<br/>
<blockquote><t>Instead of cherrypicking a rootobject <tt>#fishbowl</tt> with <tt>src</tt>, additional filters can be used to include/exclude certain objects. See next chapter on filtering below.</t>
<li>local/remote content is instanced by the <tt>src</tt> (filter) value (and attaches it to the placeholder mesh containing the <tt>src</tt> property)</li>
<li>by default all objects are loaded into the instanced src (scene) object (but not shown yet)</li>
<li><b>local</b><tt>src</tt> values (<tt>#...</tt> e.g.) starting with a non-negating filter (<tt>#cube</tt> e.g.) will (deep)reparent that object (with name <tt>cube</tt>) as the new root of the scene at position 0,0,0</li>
<li><b>local</b><tt>src</tt> values should respect (negative) filters (<tt>#-foo&price=>3</tt>)</li>
<li>the instanced scene (from a <tt>src</tt> value) should be <b>scaled accordingly</b> to its placeholder object or <b>scaled relatively</b> based on the scale-property (of a geometry-less placeholder, an 'empty'-object in blender e.g.). For more info see Chapter Scaling.</li>
<li><b>external</b><tt>src</tt> values should be served with an appropriate mimetype (so the XR Fragment-compatible browser will know how to render it). The bare minimum supported mimetypes are:</li>
<li><tt>src</tt> values should make its placeholder object invisible, and only flush its children when the resolved content can successfully be retrieved (see <ereftarget="#links">broken links</eref>)</li>
<li><b>external</b><tt>src</tt> values should respect the fallback link mechanism (see <ereftarget="#broken-links">broken links</eref></li>
<li>src-values are non-recursive: when linking to an external object (<tt>src: foo.fbx#bar</tt>), then <tt>src</tt>-metadata on object <tt>bar</tt> should be ignored.</li>
<li>an external <tt>src</tt>-value should always allow a sourceportation icon within 3 meter: teleporting to the origin URI to which the object belongs.</li>
<li>when the enduser clicks an href with <tt>#t=1,0,0</tt> (play), it will be applied to all src mediacontent with a timeline (mp4/mp3 e.g.)</li>
<li>a non-euclidian portal can be rendered for flat 3D objects (using stencil buffer e.g.) in case of spatial <tt>src</tt>-values (an object <tt>#world3</tt> or URL <tt>world3.fbx</tt> e.g.).</li>
<li><t>clicking an outbound ''external''- or ''file URI'' fully replaces the current scene and assumes <tt>pos=0,0,0&rot=0,0,0</tt> by default (unless specified)</t>
<li><t>navigation should not happen ''immediately'' when user is more than 5 meter away from the portal/object containing the href (to prevent accidental navigation e.g.)</t>
<li><t>URL navigation should always be reflected in the client URL-bar (in case of javascript: see <ereftarget="https://github.com/coderofsalvation/xrfragment/blob/dev/src/3rd/js/three/navigator.js">here</eref> for an example navigator), and only update the URL-bar after the scene (default fragment <tt>#</tt>) has been loaded.</t>
<li><t>In immersive XR mode, the navigator back/forward-buttons should be always visible (using a wearable e.g., see <ereftarget="https://github.com/coderofsalvation/xrfragment/blob/dev/example/aframe/sandbox/index.html#L26-L29">here</eref> for an example wearable)</t>
<li><t>make sure that the ''back-button'' of the ''browser-history'' always refers to the previous position (see <ereftarget="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/href.js#L97">here</eref>)</t>
<li><t>ignore previous rule in special cases, like clicking an <tt>href</tt> using camera-portal collision (the back-button could cause a teleport-loop if the previous position is too close)</t>
<li><t>href-events should bubble upward the node-tree (from children to ancestors, so that ancestors can also contain an href), however only 1 href can be executed at the same time.</t>
<li><t>the end-user navigator back/forward buttons should repeat a back/forward action until a <tt>pos=...</tt> primitive is found (the stateless xrf:// href-values should not be pushed to the url-history)</t>
<t>End-users should always have read/write access to:</t>
<olspacing="compact">
<li>the current (toplevel) <b>URL</b> (an URLbar etc)</li>
<li>URL-history (a <b>back/forward</b> button e.g.)</li>
<li>Clicking/Touching an <tt>href</tt> navigates (and updates the URL) to another scene/file (and coordinate e.g. in case the URL contains XR Fragments).</li>
How does the scale of the object (with the embedded properties) impact the scale of the referenced content?<br/>
</t>
<blockquote><t>Rule of thumb: visible placeholder objects act as a '3D canvas' for the referenced scene (a plane acts like a 2D canvas for images e.g., a cube as a 3D canvas).</t>
</blockquote>
<olspacing="compact">
<li><b>IF</b> an embedded property (<tt>src</tt> e.g.) is set on a non-empty placeholder object (geometry of >2 vertices):</li>
</ol>
<ulspacing="compact">
<li>calculate the <b>bounding box</b> of the ''placeholder'' object (maxsize=1.4 e.g.)</li>
<li>hide the ''placeholder'' object (material e.g.)</li>
<li>instance the <tt>src</tt> scene as a child of the existing object</li>
<li>calculate the <b>bounding box</b> of the instanced scene, and scale it accordingly (to 1.4 e.g.)</li>
</ul>
<blockquote><t>REASON: a non-empty placeholder object can act as a protective bounding-box (for remote content which might grow over time e.g.)</t>
<li>ELSE multiply the scale-vector of the instanced scene with the scale-vector (a common property of a 3D node) of the <b>placeholder</b> object.</li>
<li>playposition is reset to framestart, when framestart or framestop is greater than 0</li>
</ul>
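The scaling steps above can be sketched framework-agnostically with axis-aligned bounding boxes (<tt>{min,max}</tt> vectors; <tt>fitScale</tt> is an illustrative name, not part of the spec):

```javascript
// Sketch: compute a uniform scale factor that fits instanced src content
// inside its placeholder's bounding box (largest-axis fit).
function fitScale(placeholderBBox, contentBBox) {
  const size = b =>
    Math.max(b.max.x - b.min.x, b.max.y - b.min.y, b.max.z - b.min.z);
  const target  = size(placeholderBBox); // e.g. maxsize=1.4
  const current = size(contentBBox);     // bounding box of instanced scene
  return current === 0 ? 1 : target / current;
}
```

The resulting factor would then be multiplied into the instanced scene's scale-vector (as in the ELSE-branch above for empty placeholders).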
<t>| Example Value | Explanation |
|-|-|
| <tt>1,1,100</tt> | play loop between frame 1 and 100 |
| <tt>1,1,0</tt> | play once from frame 1 (oneshot) |
| <tt>1,0,0</tt> | play (previously set looprange if any) |
| <tt>0,0,0</tt> | pause |
| <tt>1,1,1</tt> | play and auto-loop between begin and end of duration |
| <tt>-1,0,0</tt> | reverse playback speed |
| <tt>2.3,0,0</tt> | set (forward) playback speed to 2.3 (no restart) |
| <tt>-2.3,0,0</tt> | set (reverse) playback speed to -2.3 ( no restart)|
| <tt>-2.3,100,0</tt> | set (reverse) playback speed to -2.3 restarting from frame 100 |</t>
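The table above can be condensed into a small interpreter for the <tt>speed,start,stop</tt> triple (<tt>parseT</tt> and the returned field names are illustrative, not normative):

```javascript
// Sketch: interpret a t=speed,start,stop value per the table above.
function parseT(value) {
  const [speed, start, stop] = value.split(',').map(Number);
  return {
    speed,                             // negative = reverse playback
    playing: speed !== 0,              // 0,0,0 = pause
    loop:    start > 0 && stop >= start, // 1,1,100 / 1,1,1 = loop range
    oneshot: start > 0 && stop === 0,  // 1,1,0 = play once from start
    restart: start > 0 || stop > 0,    // playposition resets to framestart
    start, stop
  };
}
```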
<t><ereftarget="https://github.com/coderofsalvation/xrfragment/blob/main/src/3rd/js/three/xrf/t.js">» example implementation</eref><br/>
<li>add a <tt>src: foo.mp3</tt> or <tt>src: bar.mp4</tt> metadata to a 3D object (<tt>cube</tt> e.g.)</li>
<li>to disable auto-play and global timeline (<tt>t</tt>) control: hardcode a <tt>t</tt> XR Fragment (<tt>src: bar.mp3#t=0,0,0</tt> e.g.)</li>
<li>to play it, add <tt>href: #cube</tt> somewhere else</li>
<li>when the enduser clicks the <tt>href</tt>, <tt>#t=1,0,0</tt> (play) will be applied to the <tt>src</tt> value</li>
<li>to play a single animation, add <tt>href: #animationname=1,0,0</tt> somewhere else</li>
</ol>
<blockquote><t>NOTE: hardcoded framestart/framestop uses the sampleRate/fps of embedded audio/video, otherwise the global fps applies. For more info see <tt>t</tt>.</t>
</blockquote></section>
<sectionanchor="xr-fragment-filters"><name>XR Fragment filters</name>
<li>see <ereftarget="https://coderofsalvation.github.io/xrfragment.media/queries.mp4">an (outdated) example video here</eref> which used a dedicated <tt>q=</tt> variable (now deprecated and usable directly)</li>
</table><blockquote><t>NOTE 1: after an external embedded object has been instanced (<tt>src: https://y.com/bar.fbx#room</tt> e.g.), filters do not affect them anymore (reason: local tag/name collisions can be mitigated easily, but not in case of remote content).</t>
<t>NOTE 2: depending on the used 3D framework, toggling objects (in)visible should happen by enabling/disabling writes to the colorbuffer (to allow children to remain visible while their parents are invisible).</t>
<blockquote><t>An example filter-parser (which compiles to many languages) can be <ereftarget="https://github.com/coderofsalvation/xrfragment/blob/main/src/xrfragment/Filter.hx">found here</eref></t>
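As a rough illustration (the real grammar lives in the linked Filter.hx), a toy parser for show/hide filters might look like this; it handles only plain names/tags and <tt>-</tt> negations, not comparison expressions:

```javascript
// Toy sketch of a filter parser: split a fragment like '#-topic&math'
// into objects/tags to show and to hide (simplification of Filter.hx).
function parseFilter(fragment) {
  const show = [], hide = [];
  for (const part of fragment.replace(/^#/, '').split('&')) {
    if (!part) continue;
    if (part.startsWith('-')) hide.push(part.slice(1)); // negation
    else show.push(part);
  }
  return { show, hide };
}
```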
<t>When predefined views, XRWG fragments and ID fragments (<tt>#cube</tt> or <tt>#mytag</tt> e.g.) are triggered by the enduser (via toplevel URL or clicking <tt>href</tt>):</t>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that ID (objectname)</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) matching that <tt>tag</tt> value</li>
<li>draw a wire from the enduser (preferably a bit below the camera, heartposition) to object(s) containing that in their <tt>src</tt> or <tt>href</tt> value</li>
<t>The obvious approach for this, is to consult the XRWG (<ereftarget="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</eref>), which basically has all these things already collected/organized for you during scene-load.</t>
<t><strong>UX</strong></t>
<olspacing="compact"start="4">
<li>do not update the wires when the enduser moves, leave them as is</li>
<li>offer a control near the back/forward button which allows the user to control (or turn off) the correlation-intensity of the XRWG</li>
<blockquote><t>XR Fragments does this by collapsing space into a <strong>Word Graph</strong> (the <strong>XRWG</strong>, <ereftarget="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</eref>), augmented by Bib(s)Tex.</t>
</blockquote><t>Instead of just throwing together all kinds of media types into one experience (games), what about their tagged/semantical relationships?<br/>
<li>XR Fragments promotes (de)serializing a scene to a (lowercase) XRWG (<ereftarget="https://github.com/coderofsalvation/xrfragment/blob/feat/macros/src/3rd/js/XRWG.js">example</eref>)</li>
<li>XR Fragments primes the XRWG, by collecting words from the <tt>tag</tt> and name-property of 3D objects.</li>
<li>XR Fragments primes the XRWG, by collecting words from <strong>optional</strong> metadata <strong>at the end of content</strong> of text (see default mimetype & Data URI)</li>
<li>The XRWG should be recalculated when textvalues (in <tt>src</tt>) change</li>
<li>Applications don't have to be able to access the XRWG programmatically, as they can easily generate one themselves by traversing the scene-nodes.</li>
<li>The XR Fragment focuses on fast and easy-to-generate end-user controllable word graphs (instead of complex implementations that try to defeat word ambiguity)</li>
<li>Tags are the scope for now (supporting <ereftarget="https://github.com/WICG/scroll-to-text-fragment">https://github.com/WICG/scroll-to-text-fragment</eref> will be considered)</li>
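The priming steps above can be sketched as follows (the graph structure and the <tt>primeXRWG</tt> name are illustrative; see the linked XRWG.js for the real implementation):

```javascript
// Sketch: prime a lowercase word graph from object names and tag metadata.
// Returns word -> list of object names carrying that word.
function primeXRWG(objects) {
  const graph = {};
  const add = (word, name) => {
    word = word.toLowerCase();                 // XRWG is lowercase
    (graph[word] = graph[word] || []).push(name);
  };
  for (const o of objects) {
    if (o.name) add(o.name, o.name);           // collect object names
    for (const t of (o.tag || '').split(/\s+/).filter(Boolean))
      add(t, o.name);                          // collect tag words
  }
  return graph;
}
```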
<blockquote><t>both <tt>#john@baroque</tt>-bib and BibTex <tt>@baroque{john}</tt> result in the same XRWG, however on top of that 2 tags (<tt>house</tt> and <tt>todo</tt>) are now associated with text/objectname/tag 'baroque'.</t>
</blockquote><t>As seen above, the XRWG can expand <ereftarget="https://github.com/coderofsalvation/hashtagbibs">bibs</eref> (and the whole scene) to BibTeX.<br/>
This allows hassle-free authoring and copy-paste of associations <strong>for and by humans</strong>, but also makes these URLs possible:</t>
</table><blockquote><t><ereftarget="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</eref> potentially allow the enduser to annotate text/objects by <strong>speaking/typing/scanning associations</strong>, which the XR Browser saves to remotestorage (or localStorage per toplevel URL), as well as referencing BibTags per URI later on: <tt>https://y.io/z.fbx#@baroque@todo</tt> e.g.</t>
</blockquote><t>The XRWG allows XR Browsers to show/hide relationships in realtime at various levels:</t>
Some pointers for good UX (but not necessary to be XR Fragment compatible):</t>
<olspacing="compact"start="9">
<li>The XR Browser needs to adjust tag-scope based on the endusers needs/focus (infinite tagging only makes sense when environment is scaled down significantly)</li>
<li>The XR Browser should always allow the human to view/edit the metadata, by clicking 'toggle metadata' on the 'back' (contextmenu e.g.) of any XR text, anywhere anytime.</li>
<li>respect multi-line BiBTeX metadata in text because of <ereftarget="#core-principle">the core principle</eref></li>
<li>Default font (unless specified otherwise) is a modern monospace font, for maximized tabular expressiveness (see <ereftarget="#core-principle">the core principle</eref>).</li>
<li>anti-pattern: hardcoupling an XR Browser with a mandatory <strong>markup/scripting-language</strong> which departs from unobtrusive plain text (HTML/VRML/Javascript) (see <ereftarget="#core-principle">the core principle</eref>)</li>
<li>anti-pattern: limiting human introspection, by abandoning plain text as first-class citizen.</li>
</ol>
<blockquote><t>The simplicity of appending metadata (and leveling the metadata-playfield between humans and machines) is also demonstrated by <ereftarget="https://visual-meta.info">visual-meta</eref> in greater detail.</t>
<li>lines beginning with <tt>@</tt> will not be rendered verbatim by default (<ereftarget="https://github.com/coderofsalvation/hashtagbibs#hashtagbib-mimetypes">read more</eref>)</li>
<li>the XRWG should expand bibs to BibTex occurring in text (<tt>#contactjohn@todo@important</tt> e.g.)</li>
</ul>
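A minimal sketch of the bib-to-BibTeX expansion described above (simplified; the hashtagbibs spec covers more cases, and the function name is illustrative):

```javascript
// Sketch: expand a hashtagbib like '#contactjohn@todo@important' into
// BibTeX entries, one per '@'-tag.
function bibsToBibTeX(bib) {
  const [subject, ...tags] = bib.replace(/^#/, '').split('@');
  return tags.map(tag => `@${tag}{${subject},\n}`).join('\n');
}
```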
<t>By doing so, the XR Browser (applications-layer) can interpret microformats (<ereftarget="https://visual-meta.info">visual-meta</eref>
to connect text further with its environment ( setup links between textual/spatial objects automatically e.g.).</t>
<blockquote><t>for more info on this mimetype see <ereftarget="https://github.com/coderofsalvation/hashtagbibs">bibs</eref></t>
</blockquote><t>Advantages:</t>
<ulspacing="compact">
<li>auto-expanding of <ereftarget="https://github.com/coderofsalvation/hashtagbibs">hashtagbibs</eref> associations</li>
<li>out-of-the-box (de)multiplex human text and metadata in one go (see <ereftarget="#core-principle">the core principle</eref>)</li>
<li>no network-overhead for metadata (see <ereftarget="#core-principle">the core principle</eref>)</li>
<li>ensuring high FPS: HTML/RDF historically is too 'requesty'/'parsy' for game studios</li>
<li>rich send/receive/copy-paste everywhere by default, metadata being retained (see <ereftarget="#core-principle">the core principle</eref>)</li>
<li>net result: less webservices, therefore less servers, and overall better FPS in XR</li>
</ul>
<blockquote><t>This significantly expands expressiveness and portability of human tagged text, by <strong>postponing machine-concerns to the end of the human text</strong> in contrast to literal interweaving of content and markupsymbols (or extra network requests, webservices e.g.).</t>
</blockquote><t>For all other purposes, regular mimetypes can be used (but are not required by the spec).<br/>
</t>
</section>
<sectionanchor="url-and-data-uri"><name>URL and Data URI</name>
<t>The enduser will only see <tt>welcome human</tt> and <tt>Hello friends</tt> rendered verbatim (see mimetype).
The beauty is that text in Data URI automatically promotes rich copy-paste (retaining metadata).
In both cases, the text gets rendered immediately (onto a plane geometry, hence the name '_canvas').
The XR Fragment-compatible browser can let the enduser access visual-meta(data)-fields after interacting with the object (contextmenu e.g.).</t>
<blockquote><t>additional tagging using <ereftarget="https://github.com/coderofsalvation/hashtagbibs">bibs</eref>: to tag spatial object <tt>note_canvas</tt> with 'todo', the enduser can type or speak <tt>#note_canvas@todo</tt></t>
</blockquote></section>
<sectionanchor="xr-text-example-parser"><name>XR Text example parser</name>
<t>To prime the XRWG with text from plain text <tt>src</tt>-values, here's an example XR Text (de)multiplexer in javascript (which supports inline bibs & bibtex):</t>
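A simplified sketch of such a (de)multiplexer (single-line <tt>@</tt>-entries only; a full example would also handle multi-line BibTeX, and the function name is illustrative):

```javascript
// Sketch: split human text from trailing bibs/BibTeX metadata.
// Lines beginning with '@' are treated as metadata (single-line entries only).
function demultiplex(text) {
  const human = [], meta = [];
  for (const line of text.split('\n'))
    (line.trimStart().startsWith('@') ? meta : human).push(line);
  return { text: human.join('\n'), meta };
}
```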
<blockquote><t>when an XR browser updates the human text, a quick scan for nonmatching tags (<tt>@book{nonmatchingbook</tt> e.g.) should be performed and prompt the enduser for deleting them.</t>
<li>in case of <tt>src</tt>: the nested copy of the embedded object in the placeholder object (<tt>embeddedObject</tt>) will not be replaced when the request fails</li>
<blockquote><t>due to the popularity, maturity and extensiveness of HTTP codes for client/server communication, non-HTTP protocols easily map to HTTP codes (ipfs ERR_NOT_FOUND maps to 404 e.g.)</t>
<blockquote><t>This would hide all object tagged with <tt>topic</tt>, <tt>courses</tt> or <tt>theme</tt> (including math) so that later only objects tagged with <tt>math</tt> will be visible</t>
</blockquote><t>This makes spatial content multi-purpose, without the need to separate content into separate files, or show/hide things using a complex logiclayer like javascript.</t>
<li><ereftarget="https://bibtex.eu/fields">BibTex</eref> when known bibtex-keys exist with values enclosed in <tt>{</tt> and <tt>},</tt></li>
</ul>
<t><strong>ARIA</strong> (<tt>aria-description</tt>) is the most important to support, as it promotes accessibility and allows scene transcripts. Please start <tt>aria-description</tt> with a verb to aid transcripts.</t>
<blockquote><t>Example: object 'tryceratops' with <tt>aria-description: is a huge dinosaurus standing on a #mountain</tt> generates transcript <tt>#tryceratops is a huge dinosaurus standing on a #mountain</tt>, where the hashtags are clickable XR Fragments (activating the visible-links in the XR browser).</t>
</blockquote><t>Individual nodes can be enriched with such metadata, but most importantly the scene node:</t>
<t>The addressability of XR Fragments allows for unique 3D-to-text transcripts, as well as a textual interface to navigate 3D content.<br/>
Spec:<br/>
<br/></t>
<olspacing="compact">
<li>The enduser must be able to enable an accessibility-mode (which persists across application/webpage restarts)</li>
<li>Accessibility-mode must contain a text-input for the user to enter text</li>
<li>Accessibility-mode must contain a flexible textlog for the user to read (via screenreader, screen, or TTS e.g.)</li>
<li>The <tt>back</tt> command should navigate back to the previous URL (alias for browser-backbutton)</li>
<li>The <tt>forward</tt> command should navigate forward to the next URL (alias for browser-nextbutton)</li>
<li>A destination is a 3D node containing an <tt>href</tt> with a <tt>pos=</tt> XR fragment</li>
<li>The <tt>go</tt> command should list all possible destinations</li>
<li>The <tt>go left</tt> command should move the camera around 0.3 meters to the left</li>
<li>The <tt>go right</tt> command should move the camera around 0.3 meters to the right</li>
<li>The <tt>go forward</tt> command should move the camera 0.3 meters forward (direction of current rotation).</li>
<li>The <tt>rotate left</tt> command should rotate the camera 0.3 to the left</li>
<li>The <tt>rotate right</tt> command should rotate the camera 0.3 to the right</li>
<li>The (dynamic) <tt>go abc</tt> command should navigate to <tt>#pos=scene2</tt> in case there's a 3D node with name <tt>abc</tt> and <tt>href</tt> value <tt>#pos=scene2</tt></li>
<li>The <tt>look</tt> command should give an (contextual) 3D-to-text transcript, by scanning the <tt>aria-description</tt> values of the current <tt>pos=</tt> value (including its children)</li>
<li>The <tt>do</tt> command should list all possible <tt>href</tt> values which don't contain an <tt>pos=</tt> XR Fragment</li>
<li>The (dynamic) <tt>do abc</tt> command should navigate/execute <tt>https://.../...</tt> in case a 3D node exist with name <tt>abc</tt> and <tt>href</tt> value <tt>https://.../...</tt></li>
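The command grammar above can be sketched as a small dispatcher (<tt>runCommand</tt> and the <tt>destinations</tt> map of node names to <tt>href</tt> values are illustrative assumptions):

```javascript
// Sketch of the accessibility-mode command grammar above.
function runCommand(cmd, destinations) {
  const [verb, arg] = cmd.split(' ');
  if (verb === 'go' && !arg)                                 // list destinations
    return { list: Object.keys(destinations) };
  if (verb === 'go' && ['left', 'right', 'forward'].includes(arg))
    return { move: arg, meters: 0.3 };                       // small steps
  if (verb === 'go' && destinations[arg])                    // dynamic 'go abc'
    return { navigate: destinations[arg] };
  if (verb === 'back' || verb === 'forward')                 // history aliases
    return { history: verb };
  return { error: 'unknown command' };
}
```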
<t>The only dynamic parts are <ereftarget="https://www.w3.org/TR/media-frags/">W3C Media Fragments</eref> and <ereftarget="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</eref>.<br/>
<t><strong>Q:</strong> Why is everything HTTP GET-based? What about POST/PUT/DELETE and HATEOAS?<br/>
<strong>A:</strong> Because it's out of scope: XR Fragment specifies a read-only way to surf XR documents. These things belong in the application layer (for example, an XR Hypermedia browser can decide to support POST/PUT/DELETE requests for embedded HTML thru <tt>src</tt> values)</t>
<strong>A:</strong> This is out of scope as it unhyperifies hypermedia, and this is up to XR hypermedia browser-extensions.<br/>
Historically, scripting/Javascript seems to have been able to turn webpages from hypermedia documents into their opposite (hyperscripted nonhypermedia documents).<br/>
In order to prevent this backward movement (hypermedia tends to liberate people from finnicky scripting), XR Fragments uses <ereftarget="https://www.w3.org/TR/media-frags/">W3C Media Fragments</eref> and <ereftarget="https://www.rfc-editor.org/rfc/rfc6570">URI Templates (RFC6570)</eref>, to prevent unhyperifying itself by hardcoupling to a particular markup or scripting language.<br/>
XR Fragments supports filtering objects in a scene only, because in the history of the javascript-powered web, showing/hiding document-entities seems to be one of the most popular basic usecases.<br/>
Doing advanced scripting & network-requests under the hood are obviously interesting endeavours, but this is something which should not be hardcoupled with XR Fragments or hypermedia.<br/>
This perhaps belongs more to browser extensions.<br/>
Non-HTML Hypermedia browsers should make browser extensions the right place to 'extend' experiences, in contrast to code/javascript inside hypermedia documents (this turned out to be a hypermedia antipattern).</t>
<td>some resource at something somewhere via someprotocol (<tt>http://me.com/foo.glb#foo</tt> or <tt>e76f8efec8efce98e6f</tt><ereftarget="https://interpeer.io">see interpeer.io</eref>)</td>
</tr>
<tr>
<td>URL</td>
<td>something somewhere via someprotocol (<tt>http://me.com/foo.glb</tt>)</td>
<td>simple tagging/citing/referencing standard for plaintext</td>
</tr>
<tr>
<td>BibTag</td>
<td>a BibTeX tag</td>
</tr>
<tr>
<td>(hashtag)bibs</td>
<td>an easy to speak/type/scan tagging DSL (<ereftarget="https://github.com/coderofsalvation/hashtagbibs">see here</eref>) which expands to BibTex/JSON/XML</td>